Since every trigger in the OS Linux template already has {HOST.NAME} in its name, except the items that come from discovery (e.g. discovered volumes), I wonder if there is a way to add {HOST.NAME} to all discovered items so Zabbix can produce email alerts like:
OK: Free disk space on {HOST.NAME} is less than 20% on volume /app1
instead of:
OK: Free disk space is less than 20% on volume /app1
Kind Regards,
Zabbix 3.4.2
Adding {HOST.NAME} to the template via:
Configuration -> Template -> Discovery rule -> Trigger prototype
works.
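For example, the trigger prototype name in the Linux template's filesystem discovery rule could be edited like this ({#FSNAME} is the low-level discovery macro for the mount point; the exact original wording may differ between template versions):
Before: Free disk space is less than 20% on volume {#FSNAME}
After:  Free disk space on {HOST.NAME} is less than 20% on volume {#FSNAME}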
Try this: OK: Free disk space on {HOST.NAME1} is less than 20% on volume /app1
I changed {HOST.NAME} to {HOST.NAME1}, which explicitly refers to the first host in the trigger expression.
I am unable to get the remote command option on Zabbix 6 for some reason. Any ideas?
If anyone can assist me: I have worked through all the user manuals and they don't mention any requirements for this drop-down.
In version 6, commands are available for execution only if they were previously defined as global scripts with Action operation selected as their scope.
In previous versions, you just needed to select "remote command" as the operation type.
See https://www.zabbix.com/documentation/current/en/manual/config/notifications/action/operation#configuring-an-operation
Zabbix support came back with the answer on this, and this is what you need to do.
Please be advised that in Zabbix 6.0, to use scripts in an operation step, you need to create those scripts in the Administration -> Scripts section of the Zabbix frontend and set their Scope to Action operation:
Then this script will be available in your operation steps.
So basically there is a separate section where you create your scripts, and then you assign them to the action you want to take.
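As a rough sketch of those frontend steps (the script name and command here are hypothetical examples, not from the thread):
Administration -> Scripts -> Create script
    Name:       Restart app service
    Scope:      Action operation
    Type:       Script
    Execute on: Zabbix agent
    Commands:   sudo systemctl restart my-service
Once saved with that scope, the script shows up as a choice in the action's operation step.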
I have my CronJob working fine without the use of an image stream.
The job runs every 15 minutes and always pulls a tagged image, e.g. my-cron:stable.
Since the image is always pulled and the schedule tells the cluster when to run my job, what do I gain from knowing that there's an updated version of my image?
If the image changes and there's a running instance of my job, I want the job to complete using the current version of the image.
In the next scheduled run the updated image is pulled (AlwaysPull). So it seems I don't gain much by tracking changes to an image stream for cron jobs.
ImageStream triggers only BuildConfigs and DeploymentConfigs, as per https://docs.openshift.com/container-platform/4.7/openshift_images/image-streams-manage.html .
Upstream Kubernetes doesn't have a concept of ImageStream, so there is no triggering for 'vanilla' resource types. CronJob is used both in OpenShift and Kubernetes (apiVersion: batch/v1beta1), and AFAIK the only way to access an imagestream is to use the full path to the internal registry, which is not that convenient. Your cronjob won't be restarted or stopped if the imagestream is updated, because from the Kubernetes standpoint the image is pulled only when the cronjob is triggered, and after that it just waits for the job to complete.
As I see it, you are not gaining much from using imagestreams, because one of their main points, the ability to use triggers, is not usable for cronjobs. The only reason to use them in CronJobs is if you are pushing directly to the internal registry for some reason, but that's a bad practice too.
See following links for reference:
https://access.redhat.com/solutions/4815671
How to specify OpenShift image when creating a Job
Quoting the Red Hat solution here:
Resolution
When using an image stream inside the project to run a cronjob,
specify the full path of the image:
[...]
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: docker-registry.default.svc:5000/my-app-namespace/cronjob-image:latest
            name: cronjob-image
[...]
Note that you can also put the ':latest' (or a specific tag) after the image.
In this example, the cronjob will use the imagestream cronjob-image from project my-app-namespace:
$ oc get is -n my-app-namespace
[...]
imagestream.image.openshift.io/cronjob-image   docker-registry.default.svc:5000/my-app-namespace/cronjob-image   latest   27 minutes ago
Root Cause
The image was specified without its full path to the internal docker registry. If the full path is not used (i.e. putting only cronjob-image), OpenShift won't be able to find it. [...]
By using an ImageStream reference, you can avoid having to include the container image repository hostname and port, and the project name, in your Cron Job definition.
The docker repository reference looks like this:
image-registry.openshift-image-registry.svc:5000/my-project/my-is:latest
The value of the equivalent annotation placed on a Cron Job looks like this:
[
  {
    "from": {
      "kind": "ImageStreamTag",
      "name": "my-is:latest"
    },
    "fieldPath": "spec.jobTemplate.spec.template.spec.containers[?(@.name==\"my-container\")].image"
  }
]
On the one hand, this is longer. On the other hand, it includes less redundant information.
So, compared to other types of kubernetes resources, Image Streams don't add a great deal of functionality to Cron Jobs. But you might benefit from not having to hardcode the project name if for instance you kept the Cron Job YAML in Git and wanted to apply it to several different projects.
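To make the placement concrete, here is a sketch of a minimal CronJob manifest carrying that annotation (the names my-cronjob, my-container and my-is are carried over from the examples here; the schedule and the rest of the spec are illustrative):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"my-is:latest"},"fieldPath":"spec.jobTemplate.spec.template.spec.containers[?(@.name==\"my-container\")].image"}]'
spec:
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: my-container
            # initial value; the trigger controller rewrites this when the tag updates
            image: image-registry.openshift-image-registry.svc:5000/my-project/my-is:latest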
Kubernetes-native resources which contain a pod can be updated automatically in response to an image stream tag update by adding the image.openshift.io/triggers annotation.
This annotation can be placed on CronJobs, Deployments, StatefulSets, DaemonSets, Jobs, ReplicationControllers, etc.
The easiest way to do this is with the oc command.
$ oc set triggers cronjob/my-cronjob
NAME TYPE VALUE AUTO
cronjobs/my-cronjob config true
$ oc set triggers cronjob/my-cronjob --from-image=my-is:latest -c my-container
cronjob.batch/my-cronjob updated
$ oc set triggers cronjob/my-cronjob
NAME TYPE VALUE AUTO
cronjobs/my-cronjob config true
cronjobs/my-cronjob image my-is:latest (my-container) true
The effect of the oc set triggers command was to add the annotation to the CronJob, which we can examine with:
$ oc get cronjob/my-cronjob -o json | jq '.metadata.annotations["image.openshift.io/triggers"]' -r | jq
[
  {
    "from": {
      "kind": "ImageStreamTag",
      "name": "my-is:latest"
    },
    "fieldPath": "spec.jobTemplate.spec.template.spec.containers[?(@.name==\"my-container\")].image"
  }
]
This is documented in Images - Triggering updates on image stream changes - but the syntax in the documentation appears to be wrong, so use oc set triggers if you find that the annotation you write by hand doesn't work.
Is it possible, for an already created (YARN/Hadoop) cluster, to disable logging for all servers inside it?
I can't find anything like that. Is there anything in Dataproc or Compute Engine which can help me disable the logs?
One easy way would be to create an exclusion in Stackdriver Logging that would prevent logs from that cluster from being ingested into Stackdriver.
You can create a resource-based exclusion in Stackdriver - select the Dataproc cluster you want and it will stop collecting any logs for it - and hence stop billing you for them.
Go to the Logs Ingestion page, select Exclusions and click the blue "Create exclusion" button.
As the resource type, select "Cloud Dataproc Cluster" > your_cluster_name > All cluster_uuid. Also select "no limit" for the time frame.
Fill in the "Name" field on the right and again click the blue "Create exclusion" button.
You can create up to 50 exclusion queries in StackDriver.
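For reference, the exclusion filter behind that UI selection would look roughly like this (the resource type and label names are my assumption of what the console generates; substitute your own cluster name):
resource.type="cloud_dataproc_cluster"
resource.labels.cluster_name="your_cluster_name"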
With a little help and a suggestion from Google support, there is a complete solution to skip logging for a whole YARN/Hadoop cluster.
This is only possible when creating a new cluster from Dataproc, either via the Google Cloud page or the command line.
The property which should be set in the cluster properties field:
dataproc:dataproc.logging.stackdriver.enable set to false
More info at: https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/cluster-properties
If you create the cluster through the command line, you can refer to https://cloud.google.com/sdk/gcloud/reference/dataproc/clusters/create#--properties and use a command like:
gcloud dataproc clusters create <cluster-name> --properties 'dataproc:dataproc.logging.stackdriver.enable=false'
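Afterwards, one way to double-check that the property took effect (this command is my sketch, not from the thread):
gcloud dataproc clusters describe <cluster-name> --format='value(config.softwareConfig.properties)'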
When I try to load a big CSV from a zip file, the execution log gives me the following error:
----------------------------------------- Error details ------------------------------------------
Component [Clientes:CLIENTES1] finished with status ERROR.
The size of data buffer is only 100663296. Set appropriate parameter in defaultProperties file.
--------------------------------------------------------------------------------------------------
How can I set the appropriate parameter in defaultProperties file?
I tried this link, but my CloudConnect run configurations page is different from the one in the link:
I've created the parameters file and filled in the additional parameters with the right values as the tutorial said (code below), but the same error appeared on the screen.
Name: -config; Value: new_buffer_size.txt
The new_buffer_size.txt content has just this line: DEFAULT_INTERNAL_IO_BUFFER_SIZE = 200000000
How can I solve this problem? I need to solve this before the world explodes.
CloudConnect is designed for developing ETLs that run on GoodData cloud workers, and therefore some lower-level settings, as in this case, are out of your control. The only legitimate way forward is to modify the ETL so that it can process the data with the current settings. Regarding the docs: the referenced article is outdated. The GoodData docs team is aware of it and is preparing a docs refactoring.
Note: As you have probably noticed, CloudConnect is powered by Javlin's CloverETL, so feel free to check their forums; there you will find how to overcome the issue at a lower level (no UI), but that only works for data processing on a local machine.
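For a purely local CloverETL run, the override from the question would go into the engine's defaultProperties file, e.g. (reusing the value from the question; this has no effect on GoodData workers):
DEFAULT_INTERNAL_IO_BUFFER_SIZE = 200000000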
We've recently discovered that Xcode Server (i.e. a Bot) will keep all past integrations. (We discovered this as the builds started failing and we realized the CI server was completely out of disk space).
How can you configure a bot (or the server in general) to only keep the last n integrations? Or even the last n days?
If there is no built-in setting, is there a way to accomplish this via a cron job that doesn't have to use the unofficial Xcode Server API?
The current max disk size is a ratio of 0.75 of the capacity (if I understand the output well). You can see it for yourself if you run curl -k -u USER:PASS https://localhost:20343/api/settings. You might be able to change it by calling this API as a PATCH request with a modified value for max_percent_disk_usage to something smaller and then giving it time to clean up. I haven't tested that however.
If you're interested in how this works, see /Applications/Xcode.app/Contents/Developer/usr/share/xcs/xcsd/routes/routes_setting.js line 19. From there you should be able to dig deeper and see for yourself.
Hope this helps.
This was very helpful, @czechboy!
The JSON document returned when you fetch the settings will contain the _id of the xcode instance whose settings you wish to modify, and you must send the PATCH request to https://localhost:20343/api/settings/<id>. The body of the request should be something like:
{ "set_props": { "max_percent_disk_usage": 0.40 } }
After doing this I needed to restart the server before old files were cleaned up.