How to find out which terminal is being configured?

Assume you want to connect your Ubuntu 13.04 desktop computer via a TTL-232R-3V3 USB cable to the UART interface of an embedded system running a custom Linux flavor that does not belong to a major distribution. Your own machine exposes the connection as /dev/ttyUSB0. Because you are using a framework for a high-level language (pySerial), you know that you configure some terminal options via the C struct termios.
Now the question is: where is the terminal you are configuring? Is that information sent to the remote device, configuring it? Or do you simply configure how the /dev/ttyUSB0 interface is interpreted by your own system? Or does some configuration perhaps even happen in the logic of the UART-to-USB converter cable? And if all three are possible, how would you determine which set of parameters was configured by your termios manipulations on /dev/ttyUSB0?
If it makes things easier to explain, consider the example of LF/CR handling: depending on the flags you set, the output can contain only LF, only CR, or both, as would be typical for Windows. My question is not limited to these options, though.
Note: I came to this question after realising that some options were already active which the man page declares as not available on POSIX and Linux.

All the configuration options are settings for the device driver. Most of them are implemented entirely in the driver software, such as echoing, CR-to-LF translation, and raw-vs-cooked mode.
Some of them, such as modes related to RS-232 signals, might be implemented in the device hardware, and the device driver will perform the appropriate device control operations to enable those options.
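For illustration, here is a minimal C sketch (the device path /dev/ttyUSB0 is taken from the question; error handling is kept short) that reads the driver's current termios settings with tcgetattr() and toggles the CR/LF translation and echo flags. Everything it changes lives in the host's tty driver for ttyUSB0; nothing is transmitted to the remote device:

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) {          /* read the driver's current settings */
        perror("tcgetattr");
        return 1;
    }

    /* Report how the driver currently translates CR/LF. */
    printf("ICRNL (map CR to NL on input):     %s\n", (tio.c_iflag & ICRNL) ? "on" : "off");
    printf("ONLCR (map NL to CR-NL on output): %s\n", (tio.c_oflag & ONLCR) ? "on" : "off");

    /* Disable input CR->NL mapping and local echo; both are implemented
     * entirely in the host's tty driver / line discipline. */
    tio.c_iflag &= ~ICRNL;
    tio.c_lflag &= ~ECHO;

    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        perror("tcsetattr");
        return 1;
    }

    close(fd);
    return 0;
}

pySerial's POSIX backend ends up making equivalent tcgetattr()/tcsetattr() calls on the same device node, so reading the attributes back like this is also a practical way to check which flags your high-level configuration actually changed.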

Related

What is the difference between qemu-sparc64 and qemu-system-sparc64?

I am trying to run some simple SPARC tests on bare metal QEMU. I am using qemu-sparc64 -g 1234 simple_example and it seems to be working fine (I can connect gdb to localhost:1234, step through, etc.), but I was wondering what qemu-system-sparc64 does. I tried running it with the same command-line switches but got some errors. Any help is appreciated, thank you.
For any QEMU architecture target, the qemu-system-foo binary runs a complete system emulation of the CPU and all other devices that make up a machine using that CPU type. It typically is used to run a guest OS kernel, like Linux; it can run other bare-metal guest code too.
The qemu-foo binary (sometimes also named qemu-foo-static if it has been statically linked) is QEMU's "user-mode" or "linux-user" emulation. This expects to run a single Linux userspace binary, and it translates all the system calls that process makes into direct host system calls.
If you're running qemu-sparc64 then you are not running your program in a bare-metal environment -- it's a proper Linux userspace process, even if you're not necessarily using all of the facilities that allows. If you want bare-metal then you need qemu-system-sparc64, but your program needs to actually be compiled to run correctly on the specific machine type that you tell QEMU to emulate (e.g. the Sun4u hardware, which is the default). Also, by default qemu-system-sparc64 will run the OpenBIOS firmware, so your bare-metal guest code needs to either run under that OpenBIOS environment, or else you need to tell QEMU not to run the BIOS (and then you get to deal with all the hardware setup that the BIOS would do for you).
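To make the distinction concrete, here is a tiny C guest program (a hypothetical stand-in for simple_example; the cross-compiler name is only an example) that issues a Linux write() system call. Under qemu-sparc64 that system call is translated into a host system call, so the text appears on your terminal; under qemu-system-sparc64 there is no Linux kernel behind the trap unless you boot one, so the same binary would instead need firmware support or its own bare-metal I/O code:

/* simple_example.c -- hypothetical guest program.
 * Cross-compile for the guest, e.g.:
 *   sparc64-linux-gnu-gcc -static -o simple_example simple_example.c
 * Run with: qemu-sparc64 -g 1234 simple_example */
#include <unistd.h>

int main(void)
{
    /* write() is a Linux system call. qemu-sparc64 (user-mode emulation)
     * forwards it to the host kernel; qemu-system-sparc64 would deliver
     * the trap to whatever kernel or firmware runs inside the emulated
     * machine instead. */
    static const char msg[] = "hello from the guest\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    return 0;
}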

How to set up and save QEMU running options

I'm using QEMU to replace Bochs (since it is no longer updated).
In Bochs, I can save the running settings to files and reload them. Furthermore, a table of running options is listed at boot.
I'm wondering if I can do the same with QEMU: save running settings such as the CPU model and other things to some files and reload them the next time I run the emulation.
I would also like to know whether there is a complete table of running options, so I can get a full view of which options I can set.
Thanks a lot!
For this sort of UI and management of VMs you should look at a "management layer" program that sits on top of QEMU; libvirt's "virt-manager" is one common choice here. A management layer will generally allow you to define options for a VM and save them, so you can start and stop that VM without having to specify all the command-line options every time. It will also configure QEMU in a more secure and performant way than you get by default, something that would otherwise often require rather long QEMU command lines.
QEMU itself doesn't provide this kind of facility because its philosophy is to just be the low-level tool which runs a VM, and leave the UI and persistent-VM-management to other software which can do a better job of it.
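If you want to script this rather than use the virt-manager GUI, libvirt also exposes a C API. The sketch below (the connection URI is the standard qemu:///system; the domain XML is a made-up placeholder that omits disks, networks and the emulator path) defines a persistent VM and starts it, which is roughly the "save the options and replay them" job a management layer does for you:

/* define_vm.c -- minimal libvirt sketch; build with: gcc define_vm.c -lvirt */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) {
        fprintf(stderr, "failed to connect to libvirtd\n");
        return 1;
    }

    /* The saved "running options" (CPU, memory, machine type, ...) live in
     * this XML definition; a real one would also describe disks, NICs, etc. */
    const char *xml =
        "<domain type='kvm'>"
        "  <name>demo-vm</name>"
        "  <memory unit='MiB'>1024</memory>"
        "  <vcpu>2</vcpu>"
        "  <os><type arch='x86_64' machine='pc'>hvm</type></os>"
        "</domain>";

    /* Define a persistent domain: libvirt stores the configuration so the
     * VM can be started again later without re-specifying any options. */
    virDomainPtr dom = virDomainDefineXML(conn, xml);
    if (!dom) {
        fprintf(stderr, "failed to define domain\n");
        virConnectClose(conn);
        return 1;
    }

    if (virDomainCreate(dom) < 0)            /* boot it now */
        fprintf(stderr, "failed to start domain\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}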

How does Dynatrace OneAgent inject into Java

Classical Dynatrace monitoring worked by using an agent for monitoring Java processes: you had to add the agent to the monitored VM and it worked.
Dynatrace OneAgent does this without such agents. But how does it work? There was no agent added to the Java process; all that is needed is restarting the Java process. I tried it out with Liberty Server and could find two Dynatrace threads called ruxitautosensor and ruxitsubpathsender, but I do not understand how the injection works.
Dynatrace OneAgent changed the /etc/ld.so.preload file in the OS to contain:
/$LIB/liboneagentproc.so
"/etc/ld.so.preload" and env variable "LD_PRELOAD" are used to preload specified lib when starting new process.
It seems to me they are using standard JVM Tool Interface APIs.
They pass -agentpath:<path-to-agent>=<options> to the JVM.
Full documentation here: https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.html
Example:
-agentpath:C:/PROGRA~2/DYNATR~1/oneagent/agent/lib64/oneagentloader.dll=isjdwppresent=true,loglevelcon=none,tenant=00000000-0000-0000-0000-000000000000,tenanttoken=XXXXXXXXXXXXXXXX,server=https://10.10.10.10:8443/communication
Note: Some strings have been obfuscated.
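For comparison, this is roughly what the native side of a JVM TI agent loaded via -agentpath looks like (a bare-bones sketch, not Dynatrace code); the JVM calls Agent_OnLoad during startup and hands it the option string from the command line:

/* minimal_agent.c -- bare-bones JVM TI agent sketch.
 * Build: gcc -shared -fPIC -I$JAVA_HOME/include -I$JAVA_HOME/include/linux \
 *            -o libminimalagent.so minimal_agent.c
 * Use:   java -agentpath:/path/to/libminimalagent.so=loglevel=debug MyApp */
#include <stdio.h>
#include <jvmti.h>

/* Called by the JVM during startup, before the Java program's main(). */
JNIEXPORT jint JNICALL Agent_OnLoad(JavaVM *vm, char *options, void *reserved)
{
    (void)reserved;

    jvmtiEnv *jvmti = NULL;
    if ((*vm)->GetEnv(vm, (void **)&jvmti, JVMTI_VERSION_1_2) != JNI_OK) {
        fprintf(stderr, "agent: could not obtain a JVM TI environment\n");
        return JNI_ERR;
    }

    /* 'options' is everything after the '=' in -agentpath:...=... */
    fprintf(stderr, "agent loaded, options: %s\n", options ? options : "(none)");

    /* A real agent would now request capabilities and register callbacks
     * (class-file load hooks, method entry/exit events, and so on). */
    return JNI_OK;
}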
On a very high level, the installed OS-level agent runs some processes which use OS-level functionality to iterate over the processes on the machine and inject the agent, via various techniques, into all the technologies that are supported for "deep monitoring", e.g. Java, .NET and a number of others.
More details are likely not published, for obvious reasons: all of this gives a clear advantage over the traditional approach of injecting agents manually by adjusting startup scripts, especially if you are deploying into a very large environment.

Capture Wifi packets using libpcap

I'm using libpcap to write a sniffer program. For starters, I referred to tutorials on the net from various programmers on how to write a basic sniffer program using libpcap, which captures packets only from the Ethernet connection.
I've been searching a lot for how to write a program using libpcap that captures packets from the Wi-Fi connection, but I'm not finding anything that helps.
Do I need to change some settings on my system to make sure that libpcap can capture the packets? The function pcap_lookupdev() points to the default device, which is eth0.
You either need to hardcode the name of the Wi-Fi device (probably wlan0) into your program, or give it a UI option (command-line flag, etc.) to let the user specify the device on which to capture traffic.
There is no setting on your system that will change the device returned by pcap_lookupdev().
Tcpdump and Wireshark/TShark/etc. have a -i command-line option to specify the device on which to capture, and Wireshark has a GUI dialog to allow the user to specify it. They don't rely on pcap_lookupdev() if the user specifies the device explicitly.
Note that if you're capturing on Wi-Fi, you will, by default, only capture traffic to and from your machine. If you want to capture all the traffic on your network, including traffic to and from other machines, you will need to capture in monitor mode; newer versions of libpcap have APIs to support that, but they're only guaranteed to work on OS X (for various complicated reasons, they may or may not work on Linux, which, given the device name eth0, I assume you're using; until that's fixed, you'd need to use something such as aircrack-ng to turn on monitor mode - see the Linux section of the WLAN capture setup page in the Wireshark Wiki for information on that).
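To illustrate the first part of the answer, here is a minimal libpcap sketch in C (the device name wlan0 is an assumption; a real program would take it from a command-line option, much like tcpdump's -i) that opens a named interface instead of relying on pcap_lookupdev():

/* wifi_sniff.c -- open a named device with libpcap; build with: gcc wifi_sniff.c -lpcap */
#include <stdio.h>
#include <pcap.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr,
                      const u_char *bytes)
{
    (void)user;
    (void)bytes;
    printf("captured %u bytes\n", hdr->caplen);
}

int main(int argc, char *argv[])
{
    /* Let the user pick the interface; fall back to wlan0 (an assumption). */
    const char *dev = (argc > 1) ? argv[1] : "wlan0";
    char errbuf[PCAP_ERRBUF_SIZE];

    /* 65535-byte snapshot length, promiscuous mode, 1000 ms read timeout. */
    pcap_t *handle = pcap_open_live(dev, 65535, 1, 1000, errbuf);
    if (!handle) {
        fprintf(stderr, "pcap_open_live(%s): %s\n", dev, errbuf);
        return 1;
    }

    /* Capture 10 packets, then exit. */
    pcap_loop(handle, 10, on_packet, NULL);
    pcap_close(handle);
    return 0;
}

For the monitor-mode case mentioned above, newer libpcap replaces pcap_open_live() with the pcap_create()/pcap_set_rfmon()/pcap_activate() sequence, with the Linux caveats described in the answer.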

How to run OpenERP 6.1 Web on a different machine

How do I run OpenERP Web 6.1 on a different machine than OpenERP server?
In 6.0 this was easy: there were two config files and two servers (the server and the "web client"), and they communicated over TCP/IP.
I am not sure how to setup something similar for 6.1.
I was not able to find helpful documentation on this subject. Do they still communicate over TCP/IP? How do I configure the "web client" to use a different server machine? I would like to understand the new concept here.
tl;dr answer
It's meant only for debugging, but you can.
Use the openerp-web startup script that is included in the openerp-web project, which you can install from the source. There's no separate installer for it, as it's not meant for production. You can pass parameters to set the remote OpenERP server to connect to, e.g. --server-host, --server-port, etc. Use --help to see the options.
Long answer
OpenERP 6.1 comes with a series of architectural changes that allow:
running many OpenERP server processes in parallel, thanks to improved statelessness. This makes distributed deployment a breeze, and gives load-balancing/fail-over/high-availability capabilities. It also allows OpenERP to benefit from multi-processor/multi-core hardware.
deploying the web interface as a regular OpenERP module, relieving you from having to deploy and maintain two separate server processes. When it runs embedded the web client can also make direct Python calls to the server API, avoiding unnecessary RPC marshalling, for an extra performance boost.
This change is explained in greater detail in this presentation, along with all the technical reasons behind it.
A standalone mode is still available for the web client with the openerp-web script provided in the openerp-web project, but it is meant for debugging purposes rather than production. It runs in mono-thread mode by default (see the --multi-thread startup parameter), in order to serialize all RPC calls and make debugging easier. In addition to being slower, this mode will also break all modules that have a web part, unless all regular OpenERP addons are also copied in the --addons-path of the web process. And even then, some will be broken because they may still partially depend on the embedded mode.
Now if you were simply looking for a distributed deployment model, stop looking: just run multiple OpenERP (server) processes with the full stack. Have a look at the presentation mentioned above to get started with Gunicorn, WSGI, etc.
Note: Due to these severe limitations and its relative uselessness (vs maintenance cost), the standalone mode for the web client has been completely removed (see rev. 3200 on Launchpad) in OpenERP 7.0.