Is there a way to create a problem metric in Dynatrace using a shell script that is executed on a Linux server?
By "problem metric" I mean the following:
Let's assume we use a shell script to check the status of services deployed on the Linux server.
Then Dynatrace should be able to call that shell script, and, based on the script's response, it should raise a problem.
What do you mean by 'problem metric'?
You can create metrics via the Metrics API and problems via the Events API.
You can call either endpoint from a shell script on Linux. If there is a OneAgent on the system, you could also use an extension.
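For example, here is a minimal sketch of a cron-able service check that raises a problem through the Events API v2 (the environment URL, token, service name, and entity selector are all placeholders; verify the exact payload against the Events API documentation for your environment):

#!/bin/bash
# Placeholders: fill in your environment URL and an API token that has
# the events.ingest scope.
DT_URL="https://{your-environment-id}.live.dynatrace.com"
DT_TOKEN="dt0c01.XXXX"

# Example check: is the deployed service running?
if ! systemctl is-active --quiet my-service; then
  curl -s -X POST "${DT_URL}/api/v2/events/ingest" \
    -H "Authorization: Api-Token ${DT_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{
          "eventType": "CUSTOM_ALERT",
          "title": "my-service is down",
          "entitySelector": "type(HOST),entityName(my-linux-host)"
        }'
fi

Run it from cron or a systemd timer on the server. Note that in this setup the server pushes to Dynatrace rather than Dynatrace calling the script; for the pull model you would need a OneAgent extension instead.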
Related
I have configured an Azure Release pipeline for my deployment, but I want to run MySQL scripts on a MySQL server using Azure DevOps tasks. Can someone tell me the best way to run the scripts?
1. Which task should I use from the Azure Marketplace?
2. Should I run all scripts in one task, or each script as a separate task?
3. How do I wait while a script is running?
MySQL Toolkit for Windows is a VSTS/TFS extension that contains helpful tasks for build and release definitions targeting MySQL servers. You can run an ad-hoc MySQL command, a single script, or a collection of scripts on Windows agents, including Windows hosted agents (Linux agents are not supported).
In addition, you can add a step that runs a PowerShell/batch script to execute the SQL scripts (see the sketch below), and you can also create a custom build task and publish it to VSTS.
By the way, you could use the PowerShell Start-Sleep cmdlet to wait while a script is running.
Update: You could use the Copy Files task to copy files from a source folder to a target folder using match patterns.
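For the script-step route, a sketch along these lines should work on any agent that has the mysql command-line client installed (the server, user, database, and folder names are placeholders; on a Windows agent the equivalent PowerShell loop looks much the same):

for f in ./sql/*.sql; do
  echo "Running $f"
  # Fail the pipeline step as soon as one script errors out.
  mysql --host=my-mysql-server --user=deploy --password="$DB_PASSWORD" mydb < "$f" || exit 1
done

Because the mysql client runs each script synchronously, the task only finishes when the last script does, so no explicit sleep is needed to wait for completion.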
I use the following command on my Compute Engine instance to run a script that's stored in Cloud Storage:
gsutil cat gs://project/folder/script.sh | sh
I want to create a function that runs this command, and eventually schedule that function to run, but I don't know how. Does anyone know how to do this?
Cloud Functions is serverless and you can't manage the runtime environment. You don't know what is installed in the Cloud Functions runtime, and you can't assume that gcloud or gsutil exists there.
The solution is to use Cloud Run. The behavior is very close to Cloud Functions: simply wrap your function in a web server (I wrote my first article on that) and, in your container, install what you want, especially the Cloud SDK (you can also use a base image with the Cloud SDK already installed). This time you will be able to call system binaries, because you know they exist: you installed them!
Anyway, be careful with your script execution: the container image is immutable, so you can't persist changes to files, binaries, or stored data. I don't know the content of your script, but you aren't on a VM; you are still in a serverless environment with an ephemeral runtime.
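For the scheduling half, once the command is wrapped behind an HTTP endpoint in a Cloud Run service, Cloud Scheduler can invoke it on a cron schedule. A sketch, where the job name, schedule, service URL, and service account are placeholders:

# Inside the container, the HTTP handler can simply shell out to the
# same command as before:
gsutil cat gs://project/folder/script.sh | sh

# Then create a Cloud Scheduler job that calls the Cloud Run service:
gcloud scheduler jobs create http run-my-script \
  --schedule="0 3 * * *" \
  --uri="https://script-runner-xxxx-uc.a.run.app/" \
  --oidc-service-account-email="invoker@my-project.iam.gserviceaccount.com"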
I am looking to run a script once during VM instantiation. The startup script in the Compute Engine template runs every time the VM is started. Say, for example, I have to install the GNOME desktop on a Linux host; I don't want to include that in the startup script. Rather, I am looking for something that runs once when the host is created. Of course, I want this automated. Is it possible to do this?
Edit: I am trying to achieve this on a Linux OS.
As the documentation [1] explains, if we create startup scripts on a Compute Engine instance, the instance performs those automated tasks every time it boots up.
To run a startup script only once, the most basic way is to use a file on the filesystem as a flag that records when the script has run; alternatively, you could use the instance metadata [2] to store the state.
For example, read the state via:
INSTANCE_STATE=$(curl -s http://metadata.google.internal/computeMetadata/v1/instance/attributes/state -H "Metadata-Flavor: Google")
Then set state=PROVISIONED after running the script, and so on; see the sketch below.
But it is a good idea to have your script check specifically whether the actions it is about to perform have already been done, and to handle that accordingly.
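Something along these lines, as an untested sketch (it assumes gcloud is installed on the instance and that the instance's service account is allowed to set metadata; the names and the package are illustrative):

#!/bin/bash
STATE=$(curl -s -f -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/state)
if [ "$STATE" = "PROVISIONED" ]; then
  exit 0  # one-time work was already done on a previous boot
fi

# One-time provisioning goes here, e.g. installing the desktop environment:
apt-get update && apt-get install -y gnome-session

# Record the state in instance metadata so the next boot skips this block.
ZONE=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/zone)
gcloud compute instances add-metadata "$(hostname)" \
  --zone "${ZONE##*/}" --metadata state=PROVISIONED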
Another option: at the end of your startup script, have it remove the startup-script metadata from the host instance, for example:
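A hedged one-liner for that variant (again assuming gcloud is present and the service account may modify metadata; the zone is a placeholder):

gcloud compute instances remove-metadata "$(hostname)" \
  --zone "us-central1-a" --keys=startup-script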
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/storing-retrieving-metadata
I want to automate Zabbix server deployment and change the default housekeeping parameters from a bash script. I want to avoid doing it manually from the web GUI.
Is there an API or some other way to do it?
There is no API support for changing these parameters, but you can change them directly in the database, in the config table. See the parameters that start with hk_ at http://zabbix.org/wiki/Docs/DB_schema/3.0/config.
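For example, a deployment script could apply the change with a heredoc. This is a sketch for a MySQL-backed Zabbix 3.0 install; the column names follow the linked schema page, but verify names and units against your version, and back up the config table first:

mysql -u zabbix -p zabbix <<'SQL'
-- enable internal housekeeping for history and keep 30 days
UPDATE config SET hk_history_mode=1, hk_history=30;
SQL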
I wish to know if it's possible to post-process traps and events that the Zabbix server has received from Zabbix agents. I am hoping there is some configuration option which I don't know of.
Since you don't give more details, my assumption is that you want to do something in response to a certain event, most probably a trigger, like a service going down or too many open connections. These can be handled by using Zabbix's actions to intercept an event.
The operation that follows depends on what you have to do: it can be a remote command (executed on the remote agent) or a script executed by the server.
The remote command is a straightforward concept that works out of the box; just follow the manual and the howtos.
Running something on the server isn't built in, but you can trick Zabbix into doing just that by using custom alert scripts, which are simply scripts launched by the server process. Then create a 'send message' operation that uses your custom alert script, and off you go.
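For example, a minimal custom alert script might look like this (a sketch assuming the common setup where the media type passes {ALERT.SENDTO}, {ALERT.SUBJECT}, and {ALERT.MESSAGE} as the three positional arguments; the log path is a placeholder):

#!/bin/bash
# Place this in the directory named by AlertScriptsPath in
# zabbix_server.conf and register it as a script media type.
TO="$1"       # {ALERT.SENDTO}
SUBJECT="$2"  # {ALERT.SUBJECT}
MESSAGE="$3"  # {ALERT.MESSAGE}

# Post-process the event however you need; here it is simply logged.
echo "$(date -Is) [$TO] $SUBJECT: $MESSAGE" >> /var/log/zabbix/postprocess.log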