Delete all azure event grid subscriptions using Azure CLI - azure-cli

I am using event grid for my web api. The domain name of my api has changed and now I need to update all event grid subscriptions. It so happens that I have an Azure CLI command to create each subscription, so the easiest way would be to delete all of them and create new ones. I have checked the docs, but the az eventgrid event-subscription delete command requires a --name parameter, which means that I need to execute it manually for each subscription. While this is not a huge problem, it would require maintaining a second command list for deleting. It would be much faster if I could simply say --all or something similar.
Maybe there is a solution to delete all event grid subscriptions without too much of a hassle?
My ideas so far:
Drop the entire event grid topic and create a new one (seems a bit excessive)
Apply some bash magic with az eventgrid event-subscription list

According to my test, we can use the following command in Azure Cloud Shell to delete the list of subscriptions associated with an Azure Event Grid topic.
results=$(az eventgrid event-subscription list --source-resource-id /subscriptions/{SubID}/resourceGroups/{RG}/providers/Microsoft.EventGrid/domains/domain1/topics/topic1 --query "[].{Name:name}")
for row in $(echo "$results" | jq -r '.[]|"\(.Name)"')
do
    az eventgrid event-subscription delete --name "$row" --source-resource-id /subscriptions/{SubID}/resourceGroups/{RG}/providers/Microsoft.EventGrid/domains/domain1/topics/topic1
done
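If jq is not available, the CLI's own --query/-o tsv output can drive the same loop. This is a sketch: the function name is mine, and the scope (the same placeholder resource ID as above) is passed in as an argument.

```shell
# delete_all_event_subscriptions: list every event subscription on a scope,
# then delete each one. Uses only the az CLI's JMESPath query and TSV output,
# so jq is not required.
delete_all_event_subscriptions() {
    local scope="$1"   # e.g. /subscriptions/{SubID}/resourceGroups/{RG}/providers/Microsoft.EventGrid/domains/domain1/topics/topic1
    az eventgrid event-subscription list \
        --source-resource-id "$scope" \
        --query "[].name" -o tsv |
    while read -r name; do
        az eventgrid event-subscription delete \
            --name "$name" \
            --source-resource-id "$scope"
    done
}
```

The -o tsv flag prints one bare subscription name per line, which `while read` consumes safely even if jq is not installed in your shell.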

How to add a custom log with az cli?

In the docs, it shows how to create a table, but I see no parameter for setting the collection paths for custom logs (ex: /etc/log/nginx/error.log) the way you can in the portal.
az monitor log-analytics workspace table create --name
--resource-group
--workspace-name
[--columns]
[--description]
[--no-wait]
[--plan {Analytics, Basic}]
[--retention-time]
[--total-retention-time]
When I use show on a current table, I also don't see any collection path parameters or links to other objects where that might be stored.
As far as I know, and as per this GitHub document, adding custom logs using Azure CLI is still a feature request.
@LawrenceLLo AFAIK, Azure CLI currently doesn't support the above scenario. If this is something you would like to see supported, kindly share the feedback directly with the feature owner using this link.
Looks like there is already a feature request in place. I would suggest you upvote it and leave a comment; engineering monitors this product feedback actively.
https://feedback.azure.com/d365community/idea/579dea67-2125-ec11-b6e6-000d3a4f09d0

WVD - az cli sample for creating host pool, workspace and application pool

For an Azure WVD deployment, I'd like to automate via az cli the creation of the following elements:
1 host pool using a W10 image from the gallery, automating the domain join and configuring settings for remote desktop.
1 workspace.
1 application pool, adding some apps to the list and authorizing one or more AD users on it.
The only available documentation I have found is https://learn.microsoft.com/en-us/cli/azure/desktopvirtualization?view=azure-cli-latest, which is just a list of available parameters without a detailed how-to guide or an end-to-end sample.
Any advice?
You can refer to this documentation, which explains how to do it using PowerShell. It's a pain that it's just for one resource, but it still gives you an idea.
I'd also recommend that your first step be to create what you need using the Azure Portal. This article explains how to do it from the portal.
Make sure to note down every field you're filling in, including the fields with default values.
Once you have created all the resources, you can now export an ARM template of the resources you have created, all customisation included. Look under the Automation menu of the resource, and click on Export template. You can use this template to automate your deployment.
Secondly, if you want to consider a different approach using another Infrastructure as Code tool, Terraform supports creating WVD objects. If you are familiar with Terraform, you can check this article which explains how to do it.
Let's assume you still want to proceed with the Az CLI. I had a look at the az desktopvirtualization hostpool create --help command in my Cloud Shell, and I can see the following disclaimer:
Command group 'desktopvirtualization hostpool' is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
You have to bear in mind that you will get limited functionality and limited support from Microsoft support, the Azure team, and possibly other members of the community, until the product is at least in Preview. I gave it a try on my end and am providing the code here just to get you going.
Considering your requirements, I've tried to create some commands you can use. Some parameters (the IDs) were a bit vague and I had to look at the ARM template to find out what values I should put in. The steps should be deployed in this sequence.
Create a host pool of virtual machines.
# Note: --location is only available in certain regions
az desktopvirtualization hostpool create --resource-group "myrg" \
    --host-pool-type "Pooled" \
    --load-balancer-type "BreadthFirst" \
    --location westus \
    --name "myhostpool" \
    --personal-desktop-assignment-type "automatic"
Create application groups.
az desktopvirtualization applicationgroup create --application-group-type "Desktop" \
    --resource-group "myrg" \
    --host-pool-arm-path "/subscriptions/<provide_subscriptionID_here>/resourceGroups/myrg/providers/Microsoft.DesktopVirtualization/hostpools/myhostpool" \
    --location westus \
    --name "appgroup"
Create workspaces.
az desktopvirtualization workspace create --location westus \
    --name "myworkspace" \
    --resource-group "myrg" \
    --application-group-references "/subscriptions/<provide_subscriptionID_here>/resourcegroups/myrg/providers/Microsoft.DesktopVirtualization/applicationgroups/appgroup"
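To tie the three calls together, here is a minimal end-to-end sketch. The function name is mine; the resource names (myhostpool, appgroup, myworkspace) come from the commands above, and the subscription ID, resource group, and location are placeholders you supply. Bear in mind the command group is experimental, so flags may change.

```shell
# deploy_wvd: sketch that chains the three experimental commands in order:
# host pool -> application group -> workspace.
deploy_wvd() {
    local sub="$1" rg="$2" location="$3"
    local pool_id="/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.DesktopVirtualization/hostpools/myhostpool"
    local app_id="/subscriptions/$sub/resourcegroups/$rg/providers/Microsoft.DesktopVirtualization/applicationgroups/appgroup"

    az desktopvirtualization hostpool create --resource-group "$rg" \
        --host-pool-type "Pooled" --load-balancer-type "BreadthFirst" \
        --location "$location" --name "myhostpool" \
        --personal-desktop-assignment-type "automatic"

    az desktopvirtualization applicationgroup create --resource-group "$rg" \
        --application-group-type "Desktop" --host-pool-arm-path "$pool_id" \
        --location "$location" --name "appgroup"

    az desktopvirtualization workspace create --resource-group "$rg" \
        --location "$location" --name "myworkspace" \
        --application-group-references "$app_id"
}
```

Running `deploy_wvd <subscription-id> myrg westus` creates the three resources in the required dependency order: the application group references the host pool's ARM path, and the workspace references the application group.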
To conclude, I've probably not spent enough time to really look into how much more it can be automated, but I feel that, with the exception of ARM templates, the other options will still require a fair bit of manual work.

How to disable logs for whole cluster in gce

Is it possible, for an already created (YARN/Hadoop) cluster, to disable logging for all servers inside it?
I can't find anything like that. Is there anything in Dataproc or Compute Engine which can help me disable the logs?
One easy way would be to create an exclusion in Stackdriver Logging that would prevent logs from that cluster from being ingested into Stackdriver.
You can create a resource-based exclusion in Stackdriver - select the Dataproc cluster you want and it will stop collecting any logs, and hence stop billing you for them.
Go to the Logs Ingestion page, select Exclusions and click the blue "Create Exclusion" button.
As the resource type, select "Cloud Dataproc Cluster" > your_cluster_name > All cluster_uuid. Also select "No limit" for the time frame.
Fill in the "Name" field on the right and again click the blue "Create Exclusion" button.
You can create up to 50 exclusion queries in StackDriver.
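For completeness, the same exclusion can in principle be created from the command line. This is a sketch under assumptions: the _Default sink name, the exclusion name, and the resource.type filter are mine, not from the original answer; replace the cluster name with your own.

```shell
# exclude_dataproc_logs: sketch - adds an exclusion to the _Default log sink
# so entries from the given Dataproc cluster are no longer ingested.
exclude_dataproc_logs() {
    local cluster="$1"
    gcloud logging sinks update _Default \
        --add-exclusion="name=exclude-${cluster},filter=resource.type=\"cloud_dataproc_cluster\" AND resource.labels.cluster_name=\"${cluster}\""
}
```

Note this only stops future ingestion; entries already stored are unaffected, and the 50-exclusion limit mentioned above still applies.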
With a little help and a suggestion from Google support, there is a complete solution to skip logging for a whole YARN/Hadoop cluster.
This is only possible when creating a new cluster in Dataproc, either via the Google Cloud Console or the command line.
The property which should be set in the cluster properties field:
dataproc:dataproc.logging.stackdriver.enable set to false
More info at: https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/cluster-properties
If you create the cluster through the command line, you can refer to https://cloud.google.com/sdk/gcloud/reference/dataproc/clusters/create#--properties and use a command like:
gcloud dataproc clusters create <cluster-name> --properties 'dataproc:dataproc.logging.stackdriver.enable=false'

How can I prevent GCE from copying ssh keys to all new instances?

When I create a new VM instance via Cloud Console, homedirs are automatically created for users that I have created manually on previous instances, and ssh-keys are copied to ~/.ssh/authorized_keys in respective homedirs.
I don't want that! This is IMHO a serious security flaw.
I don't want any users automatically created, I don't want any ssh keys automatically copied.
How can I achieve that?
You can specify the specific users & SSH keys to use for an instance by setting the instance level sshKeys metadata key. You can also do this from the command line using gcutil's --authorized_ssh_keys option:
$ gcutil addinstance --authorized_ssh_keys=username1:/path/to/keyfile1,username2:/path/to/keyfile2,...
If you want to make sure that no instances get the full set of users/keys, you can remove the sshKeys project level metadata key. From the Console, click Compute Engine, then Metadata, then click the trash can icon next to the sshKeys key. You will then need to specify keys for each instance, or you will not be able to log in at all. (which may be what you want in a fully automated environment)
Note: Running gcutil ssh will generate a key-pair (if needed) and add it to the sshKeys key.
Google adds these SSH keys to the project-level ssh-keys automatically, so you need to block project-wide SSH keys: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#block-project-keys
You can do it via metadata:
"block-project-ssh-keys": "true"

Configure or Create hudson job automatically

Is there any way to create a new Hudson job from another Hudson job, based on previous jobs?
For example, if I need to create a new bunch of jobs one by one: automatically create 4 jobs with similar configuration but different parameters.
Basically, steps like this:
Create an SVN branch (I can call the svn cp command and make it parameterized using a script)
Create some build based on the new SVN branch name
Later, tag it
In other words, I need to clone the previous job and substitute the new branch name wherever $Branch appears in the new job.
Thanks
You can try the Hudson Remote API for this kind of task (setting up a Hudson project).
See this tutorial for instance, and remember you can display the help quite easily:
java -jar hudson-cli.jar -s http://your_Hudson_server/ help
So, to copy a job:
java -jar hudson-cli.jar -s http://your_Hudson_server/ copy-job myjob copy-myjob
You could use groovy system script like this :
def jenkins = hudson.model.Hudson.instance
def template = jenkins.getItem("MyTemplate")
def job = jenkins.copy(template,"MyNewJob")
job.scm = new hudson.scm.SubversionSCM("http://base/branches/mybranche")
job.save()
Kind of already covered in the other answers, but for an easy way to copy the config.xml over:
curl --user USER:PASS -H "Content-Type: text/xml" -s \
    --data-binary "@config.xml" "http://hudsonserver:8080/createItem?name=newjobname"
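Building on that curl call, cloning an existing job while swapping the SVN URL can be done in one pipeline. This is a sketch under assumptions: the job config.xml endpoint, the <remote> element holding the SVN URL, and the USER:PASS credentials are assumptions about your setup.

```shell
# copy_job_with_branch: fetch a job's config.xml, replace the SVN branch URL
# inside the <remote> element, and POST the result to createItem to clone it.
copy_job_with_branch() {
    local server="$1" user="$2" template="$3" newjob="$4" branch="$5"
    curl --user "$user" -s "$server/job/$template/config.xml" |
        sed "s|<remote>[^<]*</remote>|<remote>$branch</remote>|" |
        curl --user "$user" -H "Content-Type: text/xml" -s \
            --data-binary @- "$server/createItem?name=$newjob"
}
```

Usage would look like `copy_job_with_branch http://hudsonserver:8080 USER:PASS myjob myjob-branch http://base/branches/mybranch`; the @- tells curl to read the POST body from stdin, i.e. from sed's output.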
There seems to be a plugin for Jenkins:
https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
I have not tested the plugin yet, but if it works, it should alleviate some of the human errors from straight copying a job and modifying variables/values.
def jenkins = hudson.model.Hudson.instance
def template = jenkins.getItem("MyTemplate")
def job = jenkins.copy(template,"MyNewJob")
job.save()
I used this; now I have to change the parameter values of MyNewJob using Groovy. How will I do that?
For example, I have a parameter called "Build_BranchName" whose default is //perforce/mybranch, and I have to change it to //perforce/mynewbranch.
You have the option that VonC just gave you (which is probably the safest way), but you can also go a different route by just creating a new directory in {Hudson_Home}\jobs (the directory name will be the job name) and copying a modified config.xml in there. The modification will basically just be the SVN URL. You should check out the XML from the job that you are copying. You will need to find out how to change the XML file via script, but this is a secondary problem.
Unfortunately, you have to either restart Hudson or force a reload of the configuration (visit http://<server>:<port>/reload to reload the config).
In case you're willing to use Git (like I do, mirroring the main SVN repo onto the Hudson/Jenkins server, and it works great), you could try Stephen Haberman's post-receive-hudson:
"This hook creates new jobs for each branch in the Hudson continuous integration tool. Besides creating the job if needed, the user who pushed is added to the job's email list if they were not already there."
In any case, that script can give you new hints on how to remote-control Jenkins (Hudson).