What should be provided as action-name in the az-cli command while creating a Sentinel alert? - azure-cli

I'm trying to create a Sentinel alert by using the below az-cli command:
az sentinel alert-rule action create --action-name
--resource-group
--rule-name
--workspace-name
[--etag]
[--logic-app-resource-id]
[--trigger-uri]
I have followed this article, but it doesn't have any examples for the command: https://learn.microsoft.com/en-us/cli/azure/sentinel/alert-rule/action?view=azure-cli-latest#az-sentinel-alert-rule-action-create
What should be provided as the parameter to --action-name?

az sentinel alert-rule action create --action-name
--resource-group
--rule-name
--workspace-name
[--etag]
[--logic-app-resource-id]
[--trigger-uri]
Here the --etag, --logic-app-resource-id, and --trigger-uri parameters are optional, while --action-name is the action group name.
For the action name, go to the Azure Portal > Monitor > Alerts > Action Groups and create an action group for the resource for which the alert rule should be created:
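As a minimal sketch (every value below is a hypothetical placeholder, not something taken from the question), the full command would look along these lines, with --action-name set to that action group name and the optional parameters pointing at the playbook to attach:

az sentinel alert-rule action create \
    --resource-group "my-rg" \
    --workspace-name "my-sentinel-workspace" \
    --rule-name "my-alert-rule" \
    --action-name "my-action-group" \
    --logic-app-resource-id "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Logic/workflows/my-playbook" \
    --trigger-uri "<callback URL of the playbook's HTTP trigger>"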

Related

SQL Error [91016] [22000]: Remote file 'stage_name/java_udf.jar' was not found

I created a jar file and used it as a function. I used the same role for both the function and the Snowflake stage, and uploaded the jar file to the stage using snowsql.
When I run the following command in the Snowflake UI (browser), it works:
ls @~/stage_name
However, when I use the service account (which has a similar role to mine) through DBeaver, it does not work; it comes up empty.
Same thing with the function: it works in the Snowflake UI, but not in DBeaver. Please note that both users have the same role. I also granted "all privileges" and "usage" (which should be part of "all") to the roles I want them to use. But again, it does not work. It shows the error below:
**SQL Error [91016] [22000]: Remote file 'stage_name/java_udf.jar' was not found. If you are running a copy command, please make sure files are not deleted when they are being loaded or files are not being loaded into two different tables concurrently with auto purge option.**
However, when I run the function in Snowflake UI using my user account, it works fine. Please note my user account has the same role as the service account. But it doesn't work on the service account. Any ideas?
Followed steps here in the documentation:
https://docs.snowflake.com/en/developer-guide/udf/java/udf-java-creating.html#label-udf-java-in-line-examples
So I think I know the issue.
The stage can be shared using the same role, but the files uploaded to the stage are not; they belong to the users who uploaded them. I loaded exactly the same file to the same internal stage from both users, and they did not overwrite each other:
Service Account:
name: xxxxxxx.jar
size: 389568
md5: be8b59593ae8c4b8baebaa8474bda0a7
last_modified: Tue, 8 Feb 2022 03:26:29 GMT
User account:
name: xxxxxxx.jar
size: 389568
md5: 0c4d85a3a6581fa3007f0a4113570dbc
last_modified: Mon, 7 Feb 2022 17:03:58 GMT
@~ is the USER LOCAL storage-only area.
Thus, unless the automation runs as the "same" user, it will not be able to access it.
This should be provable by taking the same "run" command that works from the WebUI for your user, logging in as the automation user, and seeing that you get the error there.
Reading that linked document in full, you can see that you should use a table stage or a named stage, which you can grant access to via the role you both have.
working proof:
on user simeon:
create or replace stage my_stage;
create or replace function echo_varchar(x varchar)
returns varchar
language java
called on null input
handler='TestFunc.echo_varchar'
target_path='@my_stage/testfunc.jar'
as
'class TestFunc {
public static String echo_varchar(String x) {
return x;
}
}';
create role my_role;
grant usage on function echo_varchar(varchar) to my_role;
grant all on stage my_stage to my_role;
grant usage on database test to my_role;
grant usage on schema not_test to my_role;
grant usage on warehouse compute_wh to my_role;
then I test it:
use role my_role;
select current_user(), current_role();
/*CURRENT_USER() CURRENT_ROLE()
SIMEON MY_ROLE*/
select test.not_test.echo_varchar('Hello');
/*TEST.NOT_TEST.ECHO_VARCHAR('HELLO')
Hello*/
I created a new user test_two and set them to role my_role.
on user test_two:
use role my_role;
select current_user(), current_role();
/*CURRENT_USER() CURRENT_ROLE()
TEST_TWO MY_ROLE*/
select test.not_test.echo_varchar('Hello');
/*TEST.NOT_TEST.ECHO_VARCHAR('HELLO')
Hello*/
OK, so a function put on an accessible stage works; let's put another one on my user SIMEON's local stage @~.
on user Simeon:
create or replace function echo_varcharb(x varchar)
returns varchar
language java
called on null input
handler='TestFuncB.echo_varcharb'
target_path='@~/testfuncb.jar'
as
'class TestFuncB {
public static String echo_varcharb(String x) {
return x;
}
}';
grant usage on function echo_varcharb(varchar) to my_role;
select test.not_test.echo_varcharb('Hello');
/*TEST.NOT_TEST.ECHO_VARCHARB('HELLO')
Hello*/
back on user test_two:
select test.not_test.echo_varcharb('Hello');
/*Remote file 'testfuncb.jar' was not found. If you are running a copy command, please make sure files are not deleted when they are being loaded or files are not being loaded into two different tables concurrently with auto purge option.*/
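So the fix for the original problem is to upload the jar to a stage the shared role can read, and point the function at it. A minimal sketch, assuming hypothetical names (shared_udf_stage, my_udf, MyClass.myMethod) and a pre-built jar uploaded via snowsql:

create or replace stage shared_udf_stage;
grant read on stage shared_udf_stage to role my_role;
-- upload from snowsql (as any user with write access to the stage):
-- PUT file:///tmp/java_udf.jar @shared_udf_stage auto_compress=false;
create or replace function my_udf(x varchar)
returns varchar
language java
imports = ('@shared_udf_stage/java_udf.jar')
handler = 'MyClass.myMethod';
grant usage on function my_udf(varchar) to role my_role;

Because the named stage (unlike @~) is a securable object, any user holding my_role can both list the jar and call the function.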

Azure DevOps user access filtering

We have a requirement in our project where we need to find out the repository access for users in Azure DevOps using the CLI.
We were able to find out the top-level access for all the users, using this CLI command as provided in the official azure-cli documentation.
Command-1
az devops user list --org {Organisation-Name} --query members[].[username,emailid,accesslevel] -o table
The above command returns the following output:
Username EmailId AccessLevel
------------------------------------------
John Doe john.doe@abc.com Basic
Rick Stein rick.stein@abc.com Stakeholder
....
Next using the user's email-id extracted from the list above, we are able to find out the granular level of repository access for each individual user as follows:
Command #2:
az devops user show --org {Organisation Name} --user john.doe@abc.com --query "[Username:user.name,ProjectRepoName:repo.access]"
The corresponding output -
{
"Username": "John Doe",
"ProjectRepoName": [
"Develop.Env1",
"Test.Env3",
"UAT.Env2"
]
}
This activity gives the required data on an individual user level. However, we want the data for all the users that are provided by the user list from command one as mentioned above.
Is there a way to combine both the az devops user list and az devops user show commands in a single script that would traverse all the users in the user list and, for each user, use the show command to provide the details of the repo access, which could then be stored as a JSON/table output?
Note: one approach we can think of is to filter out the name/email from the list generated using command-1 and pass that list to the --user section of the second command. However, the user section takes only one value at a time, so we are not sure how this can be achieved using CLI operations.
Any help or suggestions on this is highly appreciated. Thanks in advance.
The resolution of this issue was to use a foreach loop and an appropriate output format when filtering the output of command one.
A snippet of the working code:
.......
# tsv output gives one bare email address per line (table output would include header rows)
$listOfEmails = (az devops user list --org {Organisation-Name} --query members[].emailid -o tsv)
foreach($email in $listOfEmails)
{
(az devops user show --org {Organisation-Name} --user $email --query "[Username:user.name,ProjectRepoName:repo.access]")
}
...
This resulted in the successful extraction of data, as per requirement.
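If the combined output also needs to be captured in one place, a small extension of the same loop can collect everything into a single JSON file. This is only a sketch in PowerShell: it reuses the property paths from the question (user.name, repo.access), which may need adjusting to whatever az devops user show actually returns in your organisation, and it uses braces for a JMESPath multiselect hash:

$emails = az devops user list --org {Organisation-Name} --query "members[].emailid" -o tsv
$results = foreach ($email in $emails) {
    # each iteration returns one user's repo-access object
    $json = az devops user show --org {Organisation-Name} --user $email --query "{Username:user.name, ProjectRepoName:repo.access}" -o json
    ($json -join "`n") | ConvertFrom-Json
}
$results | ConvertTo-Json -Depth 5 | Set-Content users-repo-access.json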

AZ CLI query filter on multiple properties using &&

I am trying to create an az cli query that can evaluate whether I am logged into the correct tenant and subscription. I know I have to use the ? and && operators, but I have not been able to get them into a combination that works. When I query for just a single value using the line below, it works fine:
az account list --query "[?id=='my_subscription_id']" --output json
But when I try either of the lines below, it tells me it is invalid jmespath_type value:
az account list --query "[?id=='my_subscription_id' && ?tenantId=='my_tenant_id']" --output json
az account list --query "[(?id=='my_subscription_id') && (?tenantId=='my_tenant_id')]" --output json
when I try the line below, it gives me the error ] was unexpected at this time:
az account list --query "[(?id=='my_subscription_id')&&(?tenantId=='my_tenant_id')]" --output json
I know this can be done, just can't seem to find the right mixture yet.
UPDATED INFO:
Upon further testing, I made some progress but still not exactly what I was expecting. Assume that the tenant ID is 123, the subscription ID of the sub I am wanting is ABC and my account also has access to the subscription ID EFG. When running the command below:
az account list --query "[].{subscriptionId:id,tenantId:tenantId}"
I get the output:
{
"subscriptionId": "ABC",
"tenantId": "123"
},
{
"subscriptionId": "EFG",
"tenantId": "123"
}
I would expect that running the command below, would return just the single record that matches:
az account list --query "[?id == 'ABC' && tenantid == '123'].{subscriptionId:id,tenantId:tenantId}" --output json
But, it does not. It returns [].
Running the command below returns the single record that matches both conditions:
az account list --query "[?id == 'ABC' || tenantid == '123'].{subscriptionId:id,tenantId:tenantId}" --output json
Based on the documentation, && is an AND, and || is an OR. I would think when running the command line that has the || in it would return BOTH records but it only returns the one that contains both values.
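One likely explanation for both results: JMESPath identifiers are case sensitive, and the property returned by az account list is tenantId, not tenantid, so tenantid == '123' never matches anything. Under && the whole filter is then false (hence []); under || only the id comparison can match (hence the single record). With the casing corrected, a combined filter along these lines should return just the matching record:

az account list --query "[?id == 'ABC' && tenantId == '123'].{subscriptionId:id,tenantId:tenantId}" --output json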
I am trying to create an az cli query that can evaluate if I am logged
into the correct tenant and subscription.
In fact, one subscription can only trust one tenant, so you can just filter on the subscription ID and you will get the single matching tenant ID. Read more details in this blog.
A directory is the Azure AD service and each directory may have one or
more domains. An Azure subscription has a trust relationship with
Azure Active Directory which means that the subscription trusts Azure
AD to authenticate users, services, and devices.
A directory can have many subscriptions associated with it, but only
one tenant. Multiple subscriptions can trust the same Azure AD
directory, but each subscription can only trust a single directory.
In this case, you already know the subscription ID, and you have the output mapping subscription IDs to tenant IDs. You can get an accurate result by filtering on your subscription ID like this, or use the form you already knew: az account list --query "[?id=='my_subscription_id']" --output json
Then you can verify whether you have logged in to the correct tenant.
az account list --query "[].{SubID:id,TenantID:tenantId}[?SubID=='my_subscription_id']" -o table
result

I want to execute a report subscription by clicking on a field

I have created a report on SQL Server Reporting Services that lists all reports and their corresponding subscriptions.
[report name] [subscription description] [run]
I have added a column called [run] that contains an image. I want to make it so that if the user clicks on the [run] image/cell, it executes the corresponding subscription.
Is this possible?
I've looked at the image action, but it only seems to allow execution of reports (not subscriptions) or URLs.
The workaround I used was to create a separate report called SubscriptionRunner which accepts one parameter (the subscription id) and executes that subscription using the following SQL (provided by Anthony Forloney):
EXEC ReportServer.dbo.AddEvent @EventType='TimedSubscription', @EventData=<SubscriptionID>
I then set the Action property of the cell in the original report to Run the SubscriptionRunner report.
This has the side effect of opening SubscriptionRunner (which I would prefer to avoid) but it will do for now.
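For reference, the dataset query inside the SubscriptionRunner report can be as simple as the sketch below, assuming the report defines a single parameter named SubscriptionID that SSRS maps onto the query parameter of the same name:

-- dataset query in SubscriptionRunner; @SubscriptionID is the report parameter
EXEC ReportServer.dbo.AddEvent @EventType = 'TimedSubscription', @EventData = @SubscriptionID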

Jenkins Promotion Parameter values not coming through in Shell Exec Command

I have created a build promotion "choice" parameter named "RUNSCRIPT" with the values "No" (as default) and "Yes", and I am trying to get the value of the parameter in an Execute Shell command as $RUNSCRIPT, but neither "Yes" nor "No" comes through. If I look at the output it appears as $RUNSCRIPT (literally). Why is it not being replaced with the value? Any suggestions? I also tried creating other types of parameters, e.g. a String parameter, but that is not working either.
If you want to pass a value from Jenkins to a script, you need to define the parameter as a Jenkins environment variable. I have used Ant for this. For example:
<property environment="env"/>
<property name="user" value="${env.user}"/>
If I just use
<property name="user" value="${user}"/>
the value will be resolved from some other file that refers to user.
I managed to get an Approval Parameter to be passed as a Build Parameter on a downstream build (triggered by the promotion itself); you simply need to pass them on.
I learned that Approval parameters are allowed within the approval "build" so to speak, so any actions you have in that approval should be able to reference any of the Approval parameters.
This means you can have an approval parameter FOO, and then in the approval actions, if you have a "Trigger parameterized build" action, you can use a "Predefined parameters" with the text:
BAR=${FOO}
The triggered build will then have the BAR build property set to the value with which the build was promoted.