Importing JSON file for custom role in Azure AD - json

Trying to import a JSON file for creating an Azure custom role with the following content:
{
"Name": "Veeam Backup for Microsoft Azure v4 Service Account Permissions",
"Id:": null
"IsCustom": true,
"Description": "Permissions needed to function Veeam Backup for Microsoft Azure v4",
"Actions": [
"Microsoft.Authorization/roleAssignments/read",
"Microsoft.Authorization/*/Write",
"Microsoft.Commerce/RateCard/read",
"Microsoft.Compute/disks/beginGetAccess/action",
"Microsoft.Compute/disks/delete",
"Microsoft.Compute/disks/endGetAccess/action",
"Microsoft.Compute/disks/read",
"Microsoft.Compute/disks/write",
"Microsoft.Compute/snapshots/beginGetAccess/action",
"Microsoft.Compute/snapshots/delete",
"Microsoft.Compute/snapshots/endGetAccess/action",
"Microsoft.Compute/snapshots/read",
"Microsoft.Compute/snapshots/write",
"Microsoft.Compute/virtualMachines/deallocate/action",
"Microsoft.Compute/virtualMachines/delete",
"Microsoft.Compute/virtualMachines/extensions/read",
"Microsoft.Compute/virtualMachines/extensions/write",
"Microsoft.Compute/virtualMachines/read",
"Microsoft.Compute/virtualMachines/runCommand/action",
"Microsoft.Compute/virtualMachines/start/action",
"Microsoft.Compute/virtualMachines/write",
"Microsoft.DevTestLab/Schedules/write",
"Microsoft.Network/networkInterfaces/delete",
"Microsoft.Network/networkInterfaces/join/action",
"Microsoft.Network/networkInterfaces/read",
"Microsoft.Network/networkInterfaces/write",
"Microsoft.Network/networkSecurityGroups/join/action",
"Microsoft.Network/networkSecurityGroups/read",
"Microsoft.Network/publicIPAddresses/join/action",
"Microsoft.Network/publicIPAddresses/read",
"Microsoft.Network/publicIPAddresses/delete",
"Microsoft.Network/publicIPAddresses/write",
"Microsoft.Network/virtualNetworks/read",
"Microsoft.Network/virtualNetworks/subnets/join/action",
"Microsoft.Network/virtualNetworks/write",
"Microsoft.Resources/subscriptions/resourceGroups/moveResources/action",
"Microsoft.Resources/subscriptions/resourceGroups/delete",
"Microsoft.Resources/subscriptions/resourceGroups/read",
"Microsoft.Resources/subscriptions/resourceGroups/write",
"Microsoft.ServiceBus/namespaces/queues/authorizationRules/ListKeys/action",
"Microsoft.ServiceBus/namespaces/queues/authorizationRules/read",
"Microsoft.ServiceBus/namespaces/queues/authorizationRules/write",
"Microsoft.ServiceBus/namespaces/queues/delete",
"Microsoft.ServiceBus/namespaces/queues/read",
"Microsoft.ServiceBus/namespaces/queues/write",
"Microsoft.ServiceBus/namespaces/read",
"Microsoft.ServiceBus/namespaces/write",
"Microsoft.ServiceBus/register/action",
"Microsoft.Sql/locations/*",
"Microsoft.Sql/managedInstances/databases/delete",
"Microsoft.Sql/managedInstances/databases/read",
"Microsoft.Sql/managedInstances/databases/write",
"Microsoft.Sql/managedInstances/encryptionProtector/read",
"Microsoft.Sql/managedInstances/read",
"Microsoft.Sql/servers/databases/azureAsyncOperation/read",
"Microsoft.Sql/servers/databases/read",
"Microsoft.Sql/servers/databases/transparentDataEncryption/read",
"Microsoft.Sql/servers/databases/usages/read",
"Microsoft.Sql/servers/databases/write",
"Microsoft.Sql/servers/databases/delete",
"Microsoft.Sql/servers/elasticPools/read",
"Microsoft.Sql/servers/read",
"Microsoft.Storage/storageAccounts/blobServices/read",
"Microsoft.Storage/storageAccounts/listKeys/action",
"Microsoft.Storage/storageAccounts/managementPolicies/write",
"Microsoft.Storage/storageAccounts/read",
"Microsoft.Storage/storageAccounts/write",
"Microsoft.Authorization/roleDefinitions/write",
"Microsoft.Sql/servers/encryptionProtector/read",
"Microsoft.Compute/diskEncryptionSets/read",
"Microsoft.KeyVault/vaults/read",
"Microsoft.KeyVault/vaults/keys/versions/read",
"Microsoft.KeyVault/vaults/deploy/action",
"Microsoft.Sql/servers/databases/syncGroups/read"
],
"NotActions": [],
"DataActions": [
"Microsoft.KeyVault/vaults/keys/read",
"Microsoft.KeyVault/vaults/keys/encrypt/action",
"Microsoft.KeyVault/vaults/keys/decrypt/action"
],
"NotDataActions": [],
"AssignableScopes": [
"/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
]
}
results in the following error message:
Operation returned an invalid status code 'Unauthorized'
For the import I used this command:
New-AzRoleDefinition -InputFile "/home/christian/VBAZv4_CustomRole.json"
I'm quite sure that I have sufficient permissions to import.
Any idea why I'm not able to import the JSON file?

To create and assign a custom role in Azure Active Directory you require the following:
Azure AD Premium P1 or P2 license
Privileged Role Administrator or Global Administrator
AzureADPreview module when using PowerShell
Admin consent when using Graph explorer for Microsoft Graph API
Source: https://learn.microsoft.com/en-us/azure/active-directory/roles/custom-create#prerequisites
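If the prerequisites are met and the call still returns 'Unauthorized', it can also help to check which role assignments your signed-in account actually has at the subscription you are targeting. A minimal sketch with the Az PowerShell module (the sign-in name and subscription ID below are placeholders, not values from the question):
# List role assignments for the account at the subscription scope
# (placeholders; replace with your own sign-in name and subscription ID)
Connect-AzAccount
Get-AzRoleAssignment -SignInName "user@example.com" -Scope "/subscriptions/<subscription-id>"
Creating a role definition at subscription scope generally requires a role that includes Microsoft.Authorization/roleDefinitions/write on that scope, such as Owner or User Access Administrator.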
Hope this helps!

Related

AWS CLI: Error parsing parameter '--config-rule': Invalid JSON:

cat <<EOF > S3ProhibitPublicReadAccess.json
{
"ConfigRuleName": "S3PublicReadProhibited",
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
"Scope": {
"ComplianceResourceTypes": [
"AWS::S3::Bucket"
]
},
"Source": {
"Owner": "AWS",
"SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
}
}
EOF
aws configservice put-config-rule --config-rule file://S3ProhibitPublicReadAccess.json
When I go to upload my config rule after configuring it, it gives me the error below. I first tried this in Windows PowerShell, then tried it on Linux to see if I would get a different result, but I am still getting the same error on both machines.
Error:
Error parsing parameter '--config-rule': Invalid JSON: Invalid control character at: line 3 column 87 (char 132)
JSON received: {
"ConfigRuleName": "S3PublicReadProhibited",
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
"Scope": {
"ComplianceResourceTypes": [
"AWS::S3::Bucket"
]
},
"Source": {
"Owner": "AWS",
"SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
}
}
The answer is right there; this is how I read the error message:
Invalid JSON: Invalid control character at: line 3 column 87 (char 132)
"Invalid control character" - i.e. invisible "control" characters such as newlines, carriage returns and tabs.
"line 3 column 87" - tells you where it thinks the error is (this is not always totally accurate, but it's normally close to the error). In this case line 3 column 87 is the end of the line below:
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
"char 132" - this is the ASCII code for the character (its the " character btw) which is what it was expecting to find at the end of the line.
So, what does all the mean, basically it was expecting a " and it found a line ending control character instead.
The fix is to make the description key and value into a single line, so:
"Description": "Checks that your S3 buckets do not allow public read access. If an S3
bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
becomes:
"Description": "Checks that your S3 buckets do not allow public read access. If an S3 bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
I used https://jsonlint.com/ to quickly validate the JSON, and I was able to tweak it and re-validate it until it was correct.
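For reference, here is a sketch of the corrected file, assuming the only change needed is joining the Description value onto a single line (you can also validate it locally first, for example with python3 -m json.tool S3ProhibitPublicReadAccess.json):
cat <<EOF > S3ProhibitPublicReadAccess.json
{
"ConfigRuleName": "S3PublicReadProhibited",
"Description": "Checks that your S3 buckets do not allow public read access. If an S3 bucket policy or bucket ACL allows public read access, the bucket is noncompliant.",
"Scope": {
"ComplianceResourceTypes": [
"AWS::S3::Bucket"
]
},
"Source": {
"Owner": "AWS",
"SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"
}
}
EOF
aws configservice put-config-rule --config-rule file://S3ProhibitPublicReadAccess.json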

How can I use control-A character as csvDelimiter in AWS-DMS Target S3 Endpoint?

I am extracting data from Aurora to S3 using AWS DMS, and would like to use a csvDelimiter of my choice, which is ^A (i.e. Control-A, octal representation \001), while loading data to S3. How do I do that? By default, when S3 is used as the target for DMS, it uses "," as the delimiter:
compressionType=NONE;csvDelimiter=,;csvRowDelimiter=\n;
But I want to use something as below
compressionType=NONE;csvDelimiter='\001';csvRowDelimiter=\n;
But it prints the delimiter as text in the output:
I'\001'12345'\001'Abc'
I am using AWS DMS Console to set the Target Endpoint
I tried the delimiters below, but they did not work:
\\001
\u0001
'\u0001'
\u01
\001
Actual Result:
I'\001'12345'\001'Abc'
Expected Result:
I^A12345^AAbc
Here is what I did to resolve this:
I used the AWS command line to set this delimiter on my target S3 endpoint (see https://docs.aws.amazon.com/translate/latest/dg/setup-awscli.html for setting up the AWS CLI).
AWS CLI command:
aws dms modify-endpoint --endpoint-arn arn:aws:dms:us-west-2:000001111222:endpoint:OXXXXXXXXXXXXXXXXXXXX4 --endpoint-identifier dms-ep-tgt-s3-abc --endpoint-type target --engine-name s3 --extra-connection-attributes "bucketFolder=data/folderx;bucketname=bkt-xyz;CsvRowDelimiter=^D;CompressionType=NONE;CsvDelimiter=^A;" --service-access-role-arn arn:aws:iam::000001111222:role/XYZ-Datalake-DMS-Role --s3-settings ServiceAccessRoleArn=arn:aws:iam::000001111222:role/XYZ-Datalake-DMS-Role,BucketName=bkt-xyz,CompressionType=NONE
Output:
{
"Endpoint": {
"Status": "active",
"S3Settings": {
"CompressionType": "NONE",
"EnableStatistics": true,
"BucketFolder": "data/folderx",
"CsvRowDelimiter": "\u0004",
"CsvDelimiter": "\u0001",
"ServiceAccessRoleArn": "arn:aws:iam::000001111222:role/XYZ-Datalake-DMS-Role",
"BucketName": "bkt-xyz"
},
"EndpointType": "TARGET",
"ServiceAccessRoleArn": "arn:aws:iam::000001111222:role/XYZ-Datalake-DMS-Role",
"SslMode": "none",
"EndpointArn": "arn:aws:dms:us-west-2:000001111222:endpoint:OXXXXXXXXXXXXXXXXXXXX4",
"ExtraConnectionAttributes": "bucketFolder=data/folderx;bucketname=bkt-xyz;CompressionType=NONE;CsvDelimiter=\u0001;CsvRowDelimiter=\u0004;",
"EngineDisplayName": "Amazon S3",
"EngineName": "s3",
"EndpointIdentifier": "dms-ep-tgt-s3-abc"
}
}
Note: After you run the AWS CLI command, the DMS console will not show you the delimiter on the endpoint (it is not visible because it is a special character), but once you run the task it appears in the data in your S3 files.
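One practical detail the answer above does not spell out: the ^A and ^D in that command are literal control characters, not the two-character sequences caret plus letter. A minimal sketch of one way to embed them from a bash shell, using ANSI-C quoting (reusing the masked endpoint ARN from the answer; adjust the other attributes to your setup):
# Build the extra connection attributes with literal \001 (Ctrl-A) and \004 (Ctrl-D)
# characters; $'...' is bash ANSI-C quoting, which expands the octal escapes.
ATTRS=$'bucketFolder=data/folderx;bucketname=bkt-xyz;CompressionType=NONE;CsvDelimiter=\001;CsvRowDelimiter=\004'
aws dms modify-endpoint \
  --endpoint-arn arn:aws:dms:us-west-2:000001111222:endpoint:OXXXXXXXXXXXXXXXXXXXX4 \
  --extra-connection-attributes "$ATTRS"
Alternatively, in an interactive terminal you can usually type the literal character itself by pressing Ctrl+V followed by Ctrl+A (or Ctrl+D).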

Trying to trigger a SSM:Run Command action when my Cloudwatch alarm enters "ALARM" state

I am trying to trigger an SSM Run Command action when my CloudWatch alarm enters the "ALARM" state. I am trying to achieve this with a CloudWatch Events rule (event pattern) that matches the AWS CloudTrail API logs.
I tried it with the monitoring event source, event name "DescribeAlarms" and stateValue "ALARM". I also tried adding my SNS topic as the target (instead of SSM Run Command) to verify that it triggers an email to me when the alarm enters the ALARM state, but no luck.
{
"source": [
"aws.monitoring"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"monitoring.amazonaws.com"
],
"eventName": [
"DescribeAlarms"
],
"requestParameters": {
"stateValue": [
"ALARM"
]
}
}
}
I am expecting that when this condition is met - any alarm going into the ALARM state - it should hit the target, which is my SNS topic.
UPDATE:
Thanks @John for the clarification. As you suggested, I am trying to go with SNS -> Lambda -> SSM Run Command, but I am not able to fetch the instance ID from the SNS event. It says [Records - KeyError]. I read some of your posts and tried everything, but I am not able to get through. Could you please help?
Received event: {
"Records": [
{
"EventSource": "aws:sns",
"EventVersion": "1.0",
"EventSubscriptionArn": "arn:aws:sns:eu-west-1:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"Sns": {
"Type": "Notification",
"MessageId": "********************c",
"TopicArn": "arn:aws:sns:eu-west-1:*******************************",
"Subject": "ALARM: \"!!! Critical Alert !!! Disk Space is going to be full in Automation Testing\" in EU (Ireland)",
"Message": "{\"AlarmName\":\"!!! Critical Alert !!! Disk Space is going to be full in Automation Testing\",\"AlarmDescription\":\"Disk Space is going to be full in Automation Testing\",\"AWSAccountId\":\"***********\",\"NewStateValue\":\"ALARM\",\"NewStateReason\":\"Threshold Crossed: 1 out of the last 1 datapoints [**********] was less than or equal to the threshold (70.0) (minimum 1 datapoint for OK -> ALARM transition).\",\"StateChangeTime\":\"******************\",\"Region\":\"EU (Ireland)\",\"OldStateValue\":\"OK\",\"Trigger\":{\"MetricName\":\"disk_used_percent\",\"Namespace\":\"CWAgent\",\"StatisticType\":\"Statistic\",\"Statistic\":\"AVERAGE\",\"Unit\":null,\"Dimensions\":[{\"value\":\"/\",\"name\":\"path\"},{\"value\":\"i-****************\",\"name\":\"InstanceId\"},{\"value\":\"ami-****************\",\"name\":\"ImageId\"},{\"value\":\"t2.micro\",\"name\":\"InstanceType\"},{\"value\":\"xvda1\",\"name\":\"device\"},{\"value\":\"xfs\",\"name\":\"fstype\"}],\"Period\":300,\"EvaluationPeriods\":1,\"ComparisonOperator\":\"LessThanOrEqualToThreshold\",\"Threshold\":70.0,\"TreatMissingData\":\"- TreatMissingData: missing\",\"EvaluateLowSampleCountPercentile\":\"\"}}",
"Timestamp": "2019-06-29T19:23:43.829Z",
"SignatureVersion": "1",
"Signature": "XXXXXXXXXXXX",
"SigningCertUrl": "https://sns.eu-west-1.amazonaws.com/XXXXXXXX.pem",
"UnsubscribeUrl": "https://sns.eu-west-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:eu-west-1XXXXXXXXXXXXXXXXXXXXX",
"MessageAttributes":
{}
}
}
]
}
Below is my Lambda function:
from __future__ import print_function
import boto3
import json

ssm = boto3.client('ssm')
ec2 = boto3.resource('ec2')

print('Loading function')

def lambda_handler(event, context):
    # Dump the event to the log, for debugging purposes
    print("Received event: " + json.dumps(event, indent=2))
    message = event['Records']['Sns']['Message']
    msg = json.loads(message)
    InstanceId = msg['InstanceId']['value']
    print("Instance: %s" % InstanceId)
This probably won't work because AWS CloudTrail only captures API calls to AWS and the movement of a CloudWatch alarm into the ALARM state is an internal change that is not caused by an API call.
I would recommend:
Amazon CloudWatch alarm triggers an AWS Lambda function
The Lambda function calls SSM Run Command (e.g. send_command())
I was able to achieve it with the changes below:
from __future__ import print_function
import boto3
import json

ssm = boto3.client('ssm')
ec2 = boto3.resource('ec2')

print('Loading function')

def lambda_handler(event, context):
    # Dump the event to the log, for debugging purposes
    print("Received event: " + json.dumps(event, indent=2))
    message = json.loads(event['Records'][0]['Sns']['Message'])
    instance_id = message['Trigger']['Dimensions'][1]['value']
    print("Instance: %s" % instance_id)

How to define config file variables?

I have a configuration file with:
{path, "/mnt/test/"}.
{name, "Joe"}.
The path and the name can be changed by a user. As far as I know, there is a way to store those variables in a module by using file:consult/1 in
-define(VARIABLE, <parsing of the config file>).
Are there better ways to read a config file when the module starts working, without putting a parsing function in -define? (As far as I know, Erlang developers consider it bad practice to put complicated functions in -define.)
If you only need to load the config when you start the application, you may use the application config file, which is referenced in rebar.config:
{profiles, [
    {local, [
        {relx, [
            {dev_mode, false},
            {include_erts, true},
            {include_src, false},
            {vm_args, "config/local/vm.args"},
            {sys_config, "config/local/yourapplication.config"}
        ]}
    ]}
]}.
More info about this is here: rebar3 configuration.
The next step is to create yourapplication.config and store it in your application folder, e.g. /app/config/local/yourapplication.config.
This configuration should have a structure like this example:
[
    {yourapplicationname, [
        {path, "/mnt/test/"},
        {name, "Joe"}
    ]}
].
So when your application is started, you can get the config values with:
{ok, "/mnt/test/"} = application:get_env(yourapplicationname, path)
{ok, "Joe"} = application:get_env(yourapplicationname, name)
And now you may -define these variables like:
-define(VARIABLE,
    case application:get_env(yourapplicationname, path) of
        {ok, Data} -> Data;
        _ -> undefined
    end
).
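If you would rather keep the original consult-style file ({path, "/mnt/test/"}. and {name, "Joe"}.) instead of moving to an application config, a minimal sketch is to read it once at startup with file:consult/1 rather than inside a macro (the function name and the returned tuple are my own illustration, not part of the answer above):
%% Read the consult-style config once, e.g. from your application's start/2.
%% load_config/1 and what you do with the result are placeholders for illustration.
load_config(File) ->
    {ok, Terms} = file:consult(File),      %% [{path,"/mnt/test/"},{name,"Joe"}]
    Path = proplists:get_value(path, Terms),
    Name = proplists:get_value(name, Terms),
    {Path, Name}.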

Generate fake CSV to test with rspec

I want to test my method which imports a CSV file, but I don't know how to generate fake CSV files to test it. I tried a lot of the solutions I found on Stack Overflow, but they are not working in my case.
Here is the original CSV file:
firstname,lastname,home_phone_number,mobile_phone_number,email,address
orsay,dup,0154862548,0658965848,orsay.dup@gmail.com,2 rue du pré paris
richard,planc,0145878596,0625147895,richard.planc@gmail.com,45 avenue du general leclerc
person.rb
def self.import_data(file)
  filename = File.join Rails.root, file
  CSV.foreach(filename, headers: true, col_sep: ',') do |row|
    firstname, lastname, home_phone_number, mobile_phone_number, email, address = row
    person = Person.find_or_create_by(firstname: row["firstname"], lastname: row['lastname'], address: row['address'])
    if person.is_former_email?(row['email']) != true
      person.update_attributes({firstname: row['firstname'], lastname: row['lastname'], home_phone_number: row['home_phone_number'], mobile_phone_number: row['mobile_phone_number'], address: row['address'], email: row['email']})
    end
  end
end
person_spec.rb:
require "rails_helper"

RSpec.describe Person, :type => :model do
  describe "CSV file is valid" do
    file = #fake file
    it "should read in the csv" do
    end
    it "should have result" do
    end
  end

  describe "import valid data" do
    valid_data_file = #fake file
    it "save new people" do
      Person.delete_all
      expect { Person.import_data(valid_data_file) }.to change { Person.count }.by(2)
      expect(Person.find_by(lastname: 'dup').email).to eq "orsay.dup@gmail.com"
    end
    it "update with new email" do
    end
  end

  describe "import invalid data" do
    invalid_data_file = #fake file
    it "should not update with former email" do
    end
    it "should not import twice from CSV" do
    end
  end
end
I successfully used the Faked CSV Gem from https://github.com/jiananlu/faked_csv to generate a CSV file with fake data for your purpose.
Follow these steps to use it:
Open your command line (e.g. on OSX open Spotlight with CMD+Space and enter "Terminal").
Install the Faked CSV Gem by running gem install faked_csv. Note: if using a Ruby on Rails project, add gem 'faked_csv' to your Gemfile and then run bundle install.
Validate that the Faked CSV Gem installed successfully by running faked_csv --version in the terminal.
Create a configuration file for the Faked CSV Gem, in which you define how to generate the fake data. For example, the config below will generate a CSV file with 200 rows (edit this to as many as you wish) containing comma-separated columns for each field. If a field's type value is prefixed with faker: then refer to the "Usage" section of the Faker Gem at https://github.com/stympy/faker for examples.
my_faked_config.csv.json
{
"rows": 200,
"fields": [
{
"name": "firstname",
"type": "faker:name:first_name",
"inject": ["luke", "dup", "planc"]
},
{
"name": "lastname",
"type": "faker:name:last_name",
"inject": ["schoen", "orsay", "richard"]
},
{
"name": "home_phone_number",
"type": "rand:int",
"range": [1000000000, 9999999999]
},
{
"name": "mobile_phone_number",
"type": "rand:int",
"range": [1000000000, 9999999999]
},
{
"name": "email",
"type": "faker:internet:email"
},
{
"name": "address",
"type": "faker:address:street_address",
"rotate": 200
}
]
}
Run the following command to use the configuration file my_faked_config.csv.json to generate a CSV file named my_faked_data.csv in the current folder, containing the fake data:
faked_csv -i my_faked_config.csv.json -o my_faked_data.csv
Since the generated file may not include the associated label for each column, manually insert the following header line at the top of my_faked_data.csv:
firstname,lastname,home_phone_number,mobile_phone_number,email,address
Review the final contents of the my_faked_data.csv CSV file containing the fake data, which should appear similar to the following:
my_faked_data.csv
firstname,lastname,home_phone_number,mobile_phone_number,email,address
Kyler,Eichmann,8120675609,7804878030,norene@bergnaum.io,56006 Fadel Mission
Hanna,Barton,9424088332,8720530995,anabel@moengoyette.name,874 Leannon Ways
Mortimer,Stokes,5645028548,9662617821,moses@kihnlegros.org,566 Wilderman Falls
Camden,Langworth,2622619338,1951547890,vincenza@gaylordkemmer.info,823 Esmeralda Pike
Nikolas,Hessel,5476149226,1051193757,jonathon@ziemannnitzsche.name,276 Reinger Parks
...
Modify your person_spec.rb unit test using the technique shown below, which passes mock data in to test the functionality of the import_data method in your person.rb file.
person_spec.rb
require 'rails_helper'

RSpec.describe Person, type: :model do
  describe 'Class' do
    subject { Person }
    it { should respond_to(:import_data) }

    let(:data) { "firstname,lastname,home_phone_number,mobile_phone_number,email,address\rKyler,Eichmann,8120675609,7804878030,norene@bergnaum.io,56006 Fadel Mission" }

    describe "#import_data" do
      it "save new people" do
        File.stub(:open).with("filename", {:universal_newline=>false, :headers=>true}) {
          StringIO.new(data)
        }
        Person.import_data("filename")
        expect(Person.find_by(firstname: 'Kyler').mobile_phone_number).to eq 7804878030
      end
    end
  end
end
Note: I used it myself to generate a large CSV file with meaningful fake data for my Ruby on Rails CSV app. My app allows a user to upload a CSV file containing specific column names, persists it to a PostgreSQL database, and then displays the data in a paginated table view with the ability to search and sort using AJAX.
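If you would rather not stub File at all, another option (my own sketch, not part of the answer above; the file name and row values are made up) is to write a small CSV under Rails.root from within the spec and pass its relative path to Person.import_data, since that method joins the given path onto Rails.root:
it "imports people from a generated CSV file" do
  # Write a two-line CSV under Rails.root/tmp (made-up values)
  path = Rails.root.join("tmp", "fake_people.csv")
  CSV.open(path, "w") do |csv|
    csv << %w[firstname lastname home_phone_number mobile_phone_number email address]
    csv << ["kyler", "eichmann", "0101010101", "0202020202", "kyler@example.com", "1 Test Street"]
  end

  expect { Person.import_data("tmp/fake_people.csv") }.to change { Person.count }.by(1)
end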
Use OpenOffice or Excel (a spreadsheet program) and save the file out as a .csv file in the save options.