How does Dynatrace OneAgent inject into Java - dynatrace

Classic Dynatrace monitoring used a dedicated agent for monitoring Java processes: you had to add the agent to the monitored JVM and it worked.
Dynatrace OneAgent does this without a per-process agent. But how does it work? No agent was added to the Java process; all that is needed is a restart of the Java process. I tried it out with Liberty Server and could find two Dynatrace threads called ruxitautosensor and ruxitsubpathsender, but I do not understand how the injection works.

Dynatrace OneAgent adds an entry to the "/etc/ld.so.preload" file on the OS:
/$LIB/liboneagentproc.so
Both "/etc/ld.so.preload" and the environment variable "LD_PRELOAD" make the dynamic loader preload the specified library into every newly started process.

It seems to me they are using the standard JVM Tool Interface (JVMTI), i.e. passing
-agentpath:<path-to-agent>=<options> to the JVM.
Full documentation is here: https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.html
Example:
-agentpath:C:/PROGRA~2/DYNATR~1/oneagent/agent/lib64/oneagentloader.dll=isjdwppresent=true,loglevelcon=none,tenant=00000000-0000-0000-0000-000000000000,tenanttoken=XXXXXXXXXXXXXXXX,server=https://10.10.10.10:8443/communication
Note: Some strings have been obfuscated.
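For context, a native library loaded via -agentpath has to export the JVMTI entry point Agent_OnLoad; the JVM calls it during startup and hands it the option string after the "=". A minimal sketch of such an agent (this is just the standard JVMTI skeleton, not Dynatrace's implementation):

    /* minimal_agent.c - minimal JVMTI agent skeleton, not Dynatrace's code.
       Build (Linux): gcc -shared -fPIC -I$JAVA_HOME/include \
           -I$JAVA_HOME/include/linux -o libminimalagent.so minimal_agent.c
       Run:           java -agentpath:/path/to/libminimalagent.so=foo=bar ... */
    #include <stdio.h>
    #include <jvmti.h>

    /* Called by the JVM once, early in startup, with the option string. */
    JNIEXPORT jint JNICALL Agent_OnLoad(JavaVM *vm, char *options, void *reserved)
    {
        jvmtiEnv *jvmti = NULL;
        if ((*vm)->GetEnv(vm, (void **)&jvmti, JVMTI_VERSION_1_2) != JNI_OK) {
            return JNI_ERR;
        }
        fprintf(stderr, "agent loaded, options: %s\n", options ? options : "(none)");
        /* A real agent would now request JVMTI capabilities and register
           callbacks, e.g. ClassFileLoadHook, to instrument bytecode. */
        return JNI_OK;
    }

A preloaded native library can arrange for such an agent to be loaded while the JVM is starting up, which is why no change to the Java command line is needed.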

At a very high level, the installed OS-level agent runs some processes which use OS-level functionality to iterate over the processes on the machine and inject the agent, via various techniques, into all the technologies that are supported for "deep monitoring", e.g. Java, .NET and a number of others.
More details are likely not published, for obvious reasons: all of this gives a clear advantage over the traditional approach of injecting agents manually by adjusting startup scripts, especially when deploying into a very large environment.

Related

Zabbix 5.2 storage monitoring less than n GB (OS independent)

I am new to Zabbix. I have created monitoring for CPU and memory. Now I want to create a template that alerts when storage drops below some number of GB, for the Windows and Linux hosts in my network. An OS-independent trigger would be great.
I am following this tutorial, but I think it covers an older version (I am using 5.2) and the triggers are not shown in my interface:
https://www.youtube.com/watch?v=PS8nE2Zkal8&t=54s&ab_channel=AigarsKadikis
Is there any easy way to make this happen (maybe by importing some files)?
I would suggest taking advantage of the default pre-made templates officially provided by Zabbix SIA:
https://github.com/zabbix/zabbix/tree/master/templates/os/windows_agent
https://github.com/zabbix/zabbix/tree/master/templates/os/linux
As the Zabbix agent binaries for Windows and Linux are different, mapping them to hosts needs to happen at host level in the Zabbix web interface.
Changing trigger thresholds can be done at template level in the autodiscovery triggers section, or individually directly on the host entity.
Providing a full-fledged tutorial on Zabbix triggers and thresholds is out of scope for a Stack Overflow answer.
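As a rough illustration of what such a threshold can look like (the host and filesystem names here are placeholders to adapt): in Zabbix 5.2 a trigger that fires when a filesystem has less than 10 GB free can be built on the vfs.fs.size item, which both the Windows and Linux agents support:

    {My Linux host:vfs.fs.size[/,free].last()}<10G
    {My Windows host:vfs.fs.size[C:,free].last()}<10G

In the official templates this is normally done as a trigger prototype on the mounted-filesystem discovery rule, with the {#FSNAME} low-level discovery macro in place of the hard-coded path.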

How we can monitor a service status using Zabbix?

We are using Zabbix for server monitoring and it's working fine for system resources like disk, CPU, memory, etc.
Now we also want to monitor some services, such as Apache, Nginx, Puma and Sidekiq, to check whether they are running fine or not.
Can you please explain how we can monitor such services using Zabbix?
Any guidance will be appreciated.
Thanks in advance.
You should refer to the documentation; it covers Windows service monitoring and generic process monitoring with proc.* items.
Here you can find the matrix of supported items by platform.
There's an external template for systemd LLD; you can find it on Zabbix Share.
For Nginx monitoring you can use that template.
Also take a look at this repository; you can probably find something useful there.
For Sidekiq specifically, using
proc.num[,,,sidekiq]
seems to work. It uses the cmdline argument (the fourth parameter of proc.num).
Source:
https://zabbix-users.narkive.com/EKVrN9VY/proc-num-item-for-sidekiq-process
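For reference, the item signature is proc.num[<name>,<user>,<state>,<cmdline>], so the same pattern works for the other services mentioned, either by process name or by command line. The exact process names below are assumptions to adapt to your own setup:

    proc.num[nginx]
    proc.num[,,,puma]

The first form matches the process name; the second matches a pattern in the full command line, which is useful for interpreter-based services whose process name may just be the interpreter (e.g. ruby).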

How to find out which terminal is being configured?

Assume you want to connect your Ubuntu 13.04 desktop computer via a TTL-232R-3V3 USB cable to the UART interface of an embedded system running an individual Linux flavor that does not belong to a major distribution. Your own machine offers you the interface to your connection via /dev/ttyUSB0. Because you are using a high-level-language framework (pySerial), you know that you configure some terminal options via the C struct termios.
Now the question is: where is the terminal you are configuring? Is that information sent to the remote device to configure it? Or do you simply configure how the /dev/ttyUSB0 interface is interpreted by your system? Or is there maybe even some configuration happening in the logic of the UART-to-USB converter cable? And if all three are possible, how would you determine which set of parameters was configured by your termios manipulations on /dev/ttyUSB0?
If it makes things easier to explain, consider the example of LF/CR handling, which, depending on the flags you set, can produce either only LF, only CR, or both, as would be typical for Windows. My question is not limited to these options, though.
Note: I came to this question after realising that some options were already active that the man page declares as not available on POSIX and Linux.
All the configuration options are settings for the device driver. Most of them are implemented entirely in the driver software, such as echoing, CR-to-LF translation, and raw-vs-cooked mode.
Some of them, such as modes related to RS-232 signals, might be implemented in the device hardware, and the device driver will perform the appropriate device control operations to enable those options.
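To make that concrete, here is a small sketch showing that these flags live in the local driver's termios state: you read them with tcgetattr() and change them with tcsetattr(), and nothing is transmitted to the remote device. The device path is just the one from the question:

    /* termios_example.c - inspect and change local line-discipline settings. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) != 0) { perror("tcgetattr"); close(fd); return 1; }

        /* ICRNL (map CR to NL on input) is handled entirely by the local
           driver / line discipline; nothing is sent over the serial line. */
        printf("ICRNL is %s\n", (tio.c_iflag & ICRNL) ? "on" : "off");

        tio.c_iflag &= ~ICRNL;          /* turn the translation off ...    */
        tcsetattr(fd, TCSANOW, &tio);   /* ... for this tty, effective now */

        close(fd);
        return 0;
    }

This is essentially what pySerial does under the hood on Linux, so the parameters you set through it end up in the same per-device driver state.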

How to run OpenERP 6.1 Web on a different machine

How do I run OpenERP Web 6.1 on a different machine than OpenERP server?
In 6.0 this was easy: there were two config files and two servers (the server and the "web client"), and they communicated over TCP/IP.
I am not sure how to setup something similar for 6.1.
I was not able to find helpful documentation on this subject. Do they still communicate over TCP/IP? How do I configure the "web client" to use a different server machine? I would like to understand the new concept here.
tl;dr answer
It's meant only for debugging, but you can.
Use the openerp-web startup script that is included in the openerp-web project, which you can install from the source. There's no separate installer for it, as it's not meant for production. You can pass parameters to set the remote OpenERP server to connect to, e.g. --server-host, --server-port, etc. Use --help to see the options.
Long answer
OpenERP 6.1 comes with a series of architectural changes that allow:
running many OpenERP server processes in parallel, thanks to improved statelessness. This makes distributed deployment a breeze, and gives load-balancing/fail-over/high-availability capabilities. It also allows OpenERP to benefit from multi-processor/multi-core hardware.
deploying the web interface as a regular OpenERP module, relieving you from having to deploy and maintain two separate server processes. When it runs embedded the web client can also make direct Python calls to the server API, avoiding unnecessary RPC marshalling, for an extra performance boost.
This change is explained in greater detail in this presentation, along with all the technical reasons behind it.
A standalone mode is still available for the web client with the openerp-web script provided in the openerp-web project, but it is meant for debugging purposes rather than production. It runs in mono-thread mode by default (see the --multi-thread startup parameter), in order to serialize all RPC calls and make debugging easier. In addition to being slower, this mode will also break all modules that have a web part, unless all regular OpenERP addons are also copied in the --addons-path of the web process. And even then, some will be broken because they may still partially depend on the embedded mode.
Now if you were simply looking for a distributed deployment model, stop looking: just run multiple OpenERP (server) processes with the full stack. Have a look at the presentation mentioned above to get started with Gunicorn, WSGI, etc.
Note: Due to these severe limitations and its relative uselessness (versus maintenance cost), the standalone mode for the web client has been completely removed in OpenERP 7.0 (see rev. 3200 on Launchpad).

Can a TeamCity build agent be configured to only run builds with a particular parameter dependency?

I have a TeamCity build agent installed on a machine which in theory is dedicated to running dynamic security scans and I don't want it doing anything else (i.e. running the duplicates finder).
Short of either creating custom agent configuration properties and then customising each build's agent dependencies (which perhaps strictly speaking I should be doing anyway), or configuring the agent to only run selected configurations, is there any way to avoid this? Both of these approaches require additional configuration on a per-build basis, i.e. on every single build.
In a perfect world, I'd like to be able to tell the agent to only ever run builds which match a particular agent dependency. Is this possible or am I coming at it from the wrong direction?
I'm afraid TeamCity doesn't provide a way to specify that an agent can run only configurations with a specific property (and not run other configurations).
So there are only two ways to restrict agents: either with agent requirements, or by configuring the agent to only run selected configurations.
You could probably try to make some batch change in your build configuration properties, because all build configuration settings/properties are stored in XML files on disk.
In current versions of TeamCity (e.g. 8.1) you can create a pool just for your security machine, and only assign the one machine to that pool, remembering to remove it from other pools.
Then you can assign the security project to that pool. That should solve your problem.