How to disable logs for whole cluster in gce - google-compute-engine

Is it possible to disable logging for all servers inside an already created (YARN/Hadoop) cluster?
I can't find anything like it. Is there anything in Dataproc or Compute Engine that can help me disable the logs?

One easy way would be to create an exclusion in Stackdriver Logging that would prevent logs from that cluster from being ingested into Stackdriver.

You can create a resource-based exclusion in Stackdriver - select the Dataproc cluster you want and it will stop collecting any logs from it, so you won't be billed for them.
Go to the Logs Ingestion page, select Exclusions and click the blue "Create Exclusion" button.
As the resource type, select "Cloud Dataproc Cluster" > your_cluster_name > All cluster_uuid. Also select "No limit" for the time frame.
Fill in the "Name" field on the right and again click the blue "Create Exclusion" button.
You can create up to 50 exclusion queries in Stackdriver.
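The same exclusion can also be created from the command line. A rough sketch, assuming the --add-exclusion flag on the _Default sink is available in your gcloud version and using placeholder names for the exclusion and the cluster:
gcloud logging sinks update _Default \
  --add-exclusion='name=exclude-dataproc-cluster,filter=resource.type="cloud_dataproc_cluster" AND resource.labels.cluster_name="your_cluster_name"'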

With a little help and a suggestion from Google support, here is a complete solution for skipping logging for a whole YARN/Hadoop cluster.
This is only possible when creating a new cluster from Dataproc, either through the Google Cloud web page or the command line.
The property that should be set in the cluster properties field is:
dataproc:dataproc.logging.stackdriver.enable set to false
More info at: https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/cluster-properties
If you create the cluster through the command line, you can refer to https://cloud.google.com/sdk/gcloud/reference/dataproc/clusters/create#--properties and use a command like:
gcloud dataproc clusters create your_cluster_name --properties='dataproc:dataproc.logging.stackdriver.enable=false'

Related

How to add a custom log with az cli?

In the docs, it shows how to create a table, but I see no parameter for setting the collection paths for custom logs (ex: /etc/log/nginx/error.log) the way you can in the portal.
az monitor log-analytics workspace table create --name
--resource-group
--workspace-name
[--columns]
[--description]
[--no-wait]
[--plan {Analytics, Basic}]
[--retention-time]
[--total-retention-time]
When I use show on a current table, I also don't see any collection path parameters or links to other objects where that might be stored.
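For reference, a minimal invocation of the table create command above would look roughly like this (the resource group, workspace, table name and columns are illustrative placeholders) - note that it only defines a table schema, with no parameter for a collection path:
az monitor log-analytics workspace table create \
  --resource-group my-rg \
  --workspace-name my-workspace \
  --name MyNginxErrors_CL \
  --columns TimeGenerated=datetime RawData=string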
As far as I know, and as per this GitHub document, adding custom logs using the Azure CLI is still a feature request.
@LawrenceLLo AFAIK, the Azure CLI currently doesn't support the above scenario. If this is something you would like to see supported, kindly share the feedback directly with the feature owner using this link.
Looks like there is already a feature request in place; I would suggest you upvote it and add a comment. Engineering monitors this product feedback actively.
https://feedback.azure.com/d365community/idea/579dea67-2125-ec11-b6e6-000d3a4f09d0

Zabbix 6 Remote command not available

I am unable to get the remote command option on Zabbix 6 for some reason - any ideas?
If anyone can assist me: I have worked through all the user manuals and they don't mention any requirements for this drop-down.
In version 6, commands are available for execution only if they have previously been defined as global scripts with "Action operation" selected as their scope.
In previous versions, you just needed to select "remote command" as the operation type.
See https://www.zabbix.com/documentation/current/en/manual/config/notifications/action/operation#configuring-an-operation
Zabbix support came back with the answer on this, and this is what you need to do.
Please be advised that in Zabbix 6.0, to use scripts in an Operations step you need to create those scripts in the Administration - Scripts section of the Zabbix frontend and set their Scope to "Action operation".
Then the script will be available in your Operations steps.
So basically there is a separate section where you go to create your scripts, which you then assign to the action you want to take.
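For reference, the same kind of script can also be created through the Zabbix API with script.create; a rough sketch, where the URL, token, script name and command are placeholders, and scope 1 is assumed to correspond to "action operation":
curl -s -X POST https://zabbix.example.com/api_jsonrpc.php \
  -H 'Content-Type: application/json' \
  -d '{
    "jsonrpc": "2.0",
    "method": "script.create",
    "params": {
      "name": "Restart my service",
      "type": 0,
      "scope": 1,
      "command": "systemctl restart my-service"
    },
    "auth": "YOUR_API_TOKEN",
    "id": 1
  }'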

Export Cloud Logging to Big Query Auto-Parse into StructPayload

Per the google docs (https://cloud.google.com/logging/docs/export/using_exported_logs#log_entries_in_google_bigquery) I have set up my GCP app engine to auto-export to big query. However, I am running nodejs using bunyan. My logs are in json format. I'd like to take advantage of the cloud logging "structPayload" LogEntry, but the auto-export seems to automatically dump it into a "textPayload". Is there any way to configure this?
I'm one of the engineers working on Cloud Logging. We haven't yet announced the structured logging feature, and documentation will be available when we do, but the functionality is present in the cloud logging plugin and can be used.
In your case, if you edit the configuration file that is capturing your logs (under /etc/google-fluentd/config.d/), configure 'format json' and then run 'service google-fluentd reload', you should see your logs ingested as structPayload - each JSON field will become a column in BigQuery.
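For illustration, the relevant tail source stanza under /etc/google-fluentd/config.d/ might end up looking roughly like this (the path, pos_file and tag below are placeholders for your application's values):
<source>
  type tail
  format json
  path /var/log/my-app/*.log
  pos_file /var/lib/google-fluentd/pos/my-app.pos
  tag my-app
</source>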
See the tail input plugin documentation for more details on the configuration options: http://docs.fluentd.org/articles/in_tail

Cannot create a Google Compute VM instance

I thought that this would be relatively straightforward, but I cannot start a Google Compute Engine instance at all. I am creating an instance through the web interface, but get an error after clicking the "Create" button.
The error that appears in the activity log is:
Invalid value for field 'resource.type':
'https://www.googleapis.com/compute/v1/projects//zones/asia-east1-b/diskTypes/pd-standard'.
Must be a URL to a valid Compute resource of the correct type.
Here is a screen shot of my instance settings:
Any ideas about what is going wrong? I have tried different zones and VM sizes.
Not sure why you're unable to create the instance, but you can always create it directly from the terminal with gcutil.
This command should do the trick: gcutil addinstance --zone=asia-east1-b --image=debian-7-wheezy-v20140606 --machine_type=g1-small bigquery-bi-instance
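Note that gcutil has since been deprecated in favor of gcloud; a roughly equivalent gcloud command (the image family and project are assumptions, since the Debian 7 image is no longer offered) would be:
gcloud compute instances create bigquery-bi-instance \
  --zone=asia-east1-b \
  --machine-type=g1-small \
  --image-family=debian-12 \
  --image-project=debian-cloud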

How can I prevent GCE from copying ssh keys to all new instances?

When I create a new VM instance via Cloud Console, homedirs are automatically created for users that I have created manually on previous instances, and ssh-keys are copied to ~/.ssh/authorized_keys in respective homedirs.
I don't want that! This is IMHO a serious security flaw.
I don't want any users automatically created, I don't want any ssh keys automatically copied.
How can I achieve that?
You can specify the specific users & SSH keys to use for an instance by setting the instance level sshKeys metadata key. You can also do this from the command line using gcutil's --authorized_ssh_keys option:
$ gcutil addinstance --authorized_ssh_keys=username1:/path/to/keyfile1,username2:/path/to/keyfile2,...
If you want to make sure that no instances get the full set of users/keys, you can remove the sshKeys project level metadata key. From the Console, click Compute Engine, then Metadata, then click the trash can icon next to the sshKeys key. You will then need to specify keys for each instance, or you will not be able to log in at all. (which may be what you want in a fully automated environment)
Note: Running gcutil ssh will generate a key-pair (if needed) and add it to the sshKeys key.
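If you prefer to do the project-metadata step from the command line as well, a rough gcloud equivalent would be the following (on newer projects the metadata key is ssh-keys rather than sshKeys):
gcloud compute project-info remove-metadata --keys=ssh-keys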
Google adds these SSH keys to the project-wide ssh-keys metadata automatically, so you need to block project-wide SSH keys: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#block-project-keys
You can do it via metadata:
"block-project-ssh-keys": "true"