I need to create 20+ Azure resource groups with locks in two US locations (West and East). I cannot find a JSON template or CLI template that would let me create them through a user prompt in the terminal or through a JSON parameter in the console. I can't create them one by one for both regions using
New-AzureRmResourceGroup -Name $rgName -Location $locName
The closest I saw on the MS site is the below:
Variables:
$labPrefix = "Mlab"
$labnumber = "2017"
$labsubnet = "55"
$rgName = $labPrefix + $labnumber # New resource group name
$locName = "West Europe" # Location of new resource group
$saName = $rgName.Replace("-","").tolower()
$saType = "Standard_LRS" # Storage account type
If I were creating a single RG such as Mlab2017, this would work, but mine would have 4 different labPrefix values and 4 different labnumber values. I can't seem to find a better solution for this. Any help on creating a JSON array or shell script array to pass in and create the RGs with locks will be highly appreciated.
You could use a template to create the resource groups first, then use PowerShell to lock the resource groups in a specific region. For example:
$location1 = "eastus"
$location2 = "westus"
$rg=Get-AzureRmResourceGroup |Where-Object{($_.Location -eq $location1) -or ($_.Location -eq $location2)}
$rgnames = $rg.ResourceGroupName
foreach ($rgname in $rgnames)
{
$lockname = $rgname+"lock"
New-AzureRmResourceLock -LockName $lockname -LockLevel CanNotDelete -ResourceGroupName $rgname
}
You could also check this link.
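If you are open to scripting this outside PowerShell, here is a minimal Python sketch of the same idea using the Azure SDK (azure-identity and azure-mgmt-resource). The subscription ID, prefixes, lab numbers, and naming scheme below are placeholders to adjust:

from itertools import product

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ManagementLockClient, ResourceManagementClient

# Placeholders: replace with your own subscription, prefixes, and lab numbers
subscription_id = "<subscription-id>"
lab_prefixes = ["Mlab", "Nlab", "Olab", "Plab"]
lab_numbers = ["2017", "2018", "2019", "2020"]
locations = ["westus", "eastus"]

credential = DefaultAzureCredential()
rg_client = ResourceManagementClient(credential, subscription_id)
lock_client = ManagementLockClient(credential, subscription_id)

# One resource group per prefix/number/location combination, each with a delete lock
for prefix, number, location in product(lab_prefixes, lab_numbers, locations):
    rg_name = "{}{}-{}".format(prefix, number, location)
    rg_client.resource_groups.create_or_update(rg_name, {"location": location})
    lock_client.management_locks.create_or_update_at_resource_group_level(
        rg_name, rg_name + "lock", {"level": "CanNotDelete"}
    )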
I have three DynamoDB tables. Two tables hold instance IDs that are part of an application, and the other is a master table of all instances across all of my accounts along with their tag metadata. I scan the two tables to get the instance IDs and then query the master table for the tag metadata. However, when writing this to the CSV file, I want two separate header sections, one for each table's output. Once the first iteration is done, the second write continues from the row where the first iteration left off instead of starting over at the top under the second header section. Below is my code and an output example to make it clear.
CODE:
import boto3
import csv
import json
from boto3.dynamodb.conditions import Key, Attr
dynamo = boto3.client('dynamodb')
dynamodb = boto3.resource('dynamodb')
s3 = boto3.resource('s3')
# Required resource and client calls
all_instances_table = dynamodb.Table('Master')
missing_response = dynamo.scan(TableName='T1')
installed_response = dynamo.scan(TableName='T2')
# Creates CSV DictWriter object and fieldnames
with open('file.csv', 'w') as csvfile:
    fieldnames = ['Agent Not Installed', 'Not Installed Account', 'Not Installed Tags', 'Not Installed Environment', " ", 'Agent Installed', 'Installed Account', 'Installed Tags', 'Installed Environment']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    # Find instances IDs from the missing table in the master table to pull tag metadata
    for instances in missing_response['Items']:
        instance_missing = instances['missing_instances']['S']
        #print("Missing:" + instance_missing)
        query_missing = all_instances_table.query(KeyConditionExpression=Key('ID').eq(instance_missing))
        for item_missing in query_missing['Items']:
            missing_id = item_missing['ID']
            missing_account = item_missing['Account']
            missing_tags = item_missing['Tags']
            missing_env = item_missing['Environment']
            # Write the data to the CSV file
            writer.writerow({'Agent Not Installed': missing_id, 'Not Installed Account': missing_account, 'Not Installed Tags': missing_tags, 'Not Installed Environment': missing_env})
    # Find instances IDs from the installed table in the master table to pull tag metadata
    for instances in installed_response['Items']:
        instance_installed = instances['installed_instances']['S']
        #print("Installed:" + instance_installed)
        query_installed = all_instances_table.query(KeyConditionExpression=Key('ID').eq(instance_installed))
        for item_installed in query_installed['Items']:
            installed_id = item_installed['ID']
            print(installed_id)
            installed_account = item_installed['Account']
            installed_tags = item_installed['Tags']
            installed_env = item_installed['Environment']
            # Write the data to the CSV file
            writer.writerow({'Agent Installed': installed_id, 'Installed Account': installed_account, 'Installed Tags': installed_tags, 'Installed Environment': installed_env})
OUTPUT:
This is what the columns/rows look like in the file.
I need all of the output to be on the same line for each header section.
DATA:
Here is a sample of what both tables look like.
SAMPLE OUTPUT:
Here is what the for loops print out and append to the lists.
Missing:
i-0xxxxxx 333333333 foo#bar.com int
i-0yyyyyy 333333333 foo1#bar.com int
Installed:
i-0zzzzzz 44444444 foo2#bar.com int
i-0aaaaaa 44444444 foo3#bar.com int
You want to collect related rows together into a single list to write on a single row, something like:
missing = []    # collection for rows built from missing_response
installed = []  # collection for rows built from installed_response

# Find instance IDs from the missing table in the master table to pull tag metadata
for instances in missing_response['Items']:
    instance_missing = instances['missing_instances']['S']
    #print("Missing:" + instance_missing)
    query_missing = all_instances_table.query(KeyConditionExpression=Key('ID').eq(instance_missing))
    for item_missing in query_missing['Items']:
        missing_id = item_missing['ID']
        missing_account = item_missing['Account']
        missing_tags = item_missing['Tags']
        missing_env = item_missing['Environment']
        # Collect the first half of the row in the missing list
        missing.append([missing_id, missing_account, missing_tags, missing_env])

# Find instance IDs from the installed table in the master table to pull tag metadata
for instances in installed_response['Items']:
    instance_installed = instances['installed_instances']['S']
    #print("Installed:" + instance_installed)
    query_installed = all_instances_table.query(KeyConditionExpression=Key('ID').eq(instance_installed))
    for item_installed in query_installed['Items']:
        installed_id = item_installed['ID']
        print(installed_id)
        installed_account = item_installed['Account']
        installed_tags = item_installed['Tags']
        installed_env = item_installed['Environment']
        # Collect the second half of the row in the installed list
        installed.append([installed_id, installed_account, installed_tags, installed_env])

# Combine the two lists outside the loops and write one combined row per index
# (use a plain csv.writer here, since these rows are lists rather than dicts;
# add an empty spacer column between the halves manually if you want one)
for i, m in enumerate(missing):
    writer.writerow(m + installed[i])
This will work if your installed and missing tables operate on a relatable field - like a timestamp or an account ID, something that you can ensure keeps the rows being concatenated in the same order. A data sample would be useful to really answer the question.
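The concatenation above also assumes both lists end up the same length. If they can differ, a padded merge avoids an IndexError; here is a minimal sketch (Python 3), assuming missing and installed are the lists of 4-column rows built above and writer is a plain csv.writer:

from itertools import zip_longest

# Pad the shorter list with empty cells so every combined row has the same width
blank = ["", "", "", ""]    # ID, Account, Tags, Environment

for m, inst in zip_longest(missing, installed, fillvalue=None):
    left = m if m is not None else blank
    right = inst if inst is not None else blank
    # Optional spacer column between the two header sections
    writer.writerow(left + [""] + right)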
I have a number of module invocations that look similar to this
1 module "gcpue4a1" {
2 source = "../../../modules/pods"
3
4 }
where the module is creating instances, DNS records, etc.
locals {
  gateway_name = "gateway-${var.network_zone}-${var.environment}-1"
}

resource "google_compute_instance" "gateway" {
  name                      = "${local.gateway_name}"
  machine_type              = "n1-standard-8"
  zone                      = "${var.zone}"
  allow_stopping_for_update = true
}
How can I iterate over a list of all instances that have been created through this module? Can I do it with instance tags or labels?
In the end what I want is to be able to iterate over a list to export to an ansible inventory file. But I'm just not sure how I do this when my resources are encapsulated in modules.
With terraform show I can clearly see the structure of the variables.
➜ gcp-us-east4 git:(integration) ✗ terraform show | grep google_compute_instance.gateway -n1
640- zone = us-east4-a
641:module.screencast-gcp-pod-gcpue4a1-food.google_compute_instance.gateway:
642- id = gateway-gcpue4a1-food-1
--
--
991- zone = us-east4-a
992:module.screencast-gcp-pod-gcpue4a2-food.google_compute_instance.gateway:
993- id = gateway-gcpue4a2-food-1
--
--
1342- zone = us-east4-a
1343:module.screencast-gcp-pod-gcpue4a3-food.google_compute_instance.gateway:
1344- id = gateway-gcpue4a3-food-1
--
--
1693- zone = us-east4-a
1694:module.screencast-gcp-pod-gcpue4a4-food.google_compute_instance.gateway:
1695- id = gateway-gcpue4a4-food-1
The etcd inventory piece works just fine when I explicitly say which node I want. The overall inventory piece below it does not and I'm not sure how to fix it.
##Create ETCD Inventory
provisioner "local-exec" {
  command = "echo \"\n[etcd]\n${google_compute_instance.k8s-master.name} ansible_ssh_host=${google_compute_instance.k8s-master.network_interface.0.address}\" >> kubespray-inventory"
}

##Create Nodes Inventory
provisioner "local-exec" {
  command = "echo \"\n[kube-node]\" >> kubespray-inventory"
}
# provisioner "local-exec" {
#   command = "echo \"${join("\n", formatlist("%s ansible_ssh_host=%s", google_compute_instance.gateway.*.name, google_compute_instance.gateway.*.network_interface.0.address))}\" >> kubespray-inventory"
# }
➜ gcp-us-east4 git:(integration) ✗ terraform apply
Error: resource 'null_resource.ansible-provision' provisioner local-exec (#4): unknown resource 'google_compute_instance.gateway' referenced in variable google_compute_instance.gateway.*.id
You can make sure each module adds a label that matches the module, and you can then use gcloud compute instances list with a filter to only show the ones with that specific label.
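For example, here is a minimal Python sketch of that approach that shells out to gcloud and appends Ansible inventory lines. The label key pod and its value are hypothetical placeholders, and it assumes the gcloud CLI is installed and authenticated against the right project:

#!/usr/bin/env python
import json
import subprocess

def inventory_lines(label_key, label_value):
    # Ask gcloud for every instance carrying the label, as JSON
    out = subprocess.check_output([
        "gcloud", "compute", "instances", "list",
        "--filter=labels.{}={}".format(label_key, label_value),
        "--format=json",
    ])
    lines = []
    for inst in json.loads(out):
        name = inst["name"]
        # First NIC's internal IP; adjust if you need the external address instead
        ip = inst["networkInterfaces"][0]["networkIP"]
        lines.append("{} ansible_ssh_host={}".format(name, ip))
    return lines

if __name__ == "__main__":
    with open("kubespray-inventory", "a") as f:
        f.write("\n[kube-node]\n")
        f.write("\n".join(inventory_lines("pod", "gcpue4a1-food")) + "\n")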
I've connected a device that communicates with my Mosquitto MQTT server (RPi) and is publishing to a specified topic. What I want to do now is store the messages published on that topic into a MySQL database. I know how MySQL works, but I don't know how to listen for these incoming publications. I'm looking for a lightweight solution that runs in the background. Any pointers or ideas on libraries to use are very welcome.
I've done something similar in the last few days:
live-collecting weather station data with pywws
publishing with pywws.service.mqtt to an MQTT broker
a Python script on a NAS collecting the data and writing it to MariaDB
#!/usr/bin/python -u
import mysql.connector as mariadb
import paho.mqtt.client as mqtt
import ssl

mariadb_connection = mariadb.connect(user='USER', password='PW', database='MYDB')
cursor = mariadb_connection.cursor()

# MQTT Settings
MQTT_Broker = "192.XXX.XXX.XXX"
MQTT_Port = 8883
Keep_Alive_Interval = 60
MQTT_Topic = "/weather/pywws/#"

# Subscribe
def on_connect(client, userdata, flags, rc):
    mqttc.subscribe(MQTT_Topic, 0)

def on_message(mosq, obj, msg):
    # Prepare Data, separate columns and values
    msg_clear = msg.payload.translate(None, '{}""').split(", ")
    msg_dict = {}
    for i in range(0, len(msg_clear)):
        msg_dict[msg_clear[i].split(": ")[0]] = msg_clear[i].split(": ")[1]
    # Prepare dynamic sql-statement
    placeholders = ', '.join(['%s'] * len(msg_dict))
    columns = ', '.join(msg_dict.keys())
    sql = "INSERT INTO pws ( %s ) VALUES ( %s )" % (columns, placeholders)
    # Save Data into DB Table
    try:
        cursor.execute(sql, msg_dict.values())
    except mariadb.Error as error:
        print("Error: {}".format(error))
    mariadb_connection.commit()

def on_subscribe(mosq, obj, mid, granted_qos):
    pass

mqttc = mqtt.Client()

# Assign event callbacks
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_subscribe = on_subscribe

# Connect
mqttc.tls_set(ca_certs="ca.crt", tls_version=ssl.PROTOCOL_TLSv1_2)
mqttc.connect(MQTT_Broker, int(MQTT_Port), int(Keep_Alive_Interval))

# Continue the network loop & close db-connection
mqttc.loop_forever()
mariadb_connection.close()
If you are familiar with Python, the Paho MQTT library is simple, light on resources, and interfaces well with Mosquitto. To use it, simply subscribe to the topic and set up a callback that passes the payload to MySQL using peewee, as shown in this answer. Run the script in the background and call it good!
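For reference, a minimal sketch of that approach with paho-mqtt and peewee; the database name, Reading model, topic, and credentials below are placeholders you would adapt to your own schema:

import paho.mqtt.client as mqtt
from peewee import Model, MySQLDatabase, TextField

# Hypothetical database/table names; adjust credentials and schema to your setup
db = MySQLDatabase('sensors', user='USER', password='PW', host='localhost')

class Reading(Model):
    topic = TextField()
    payload = TextField()

    class Meta:
        database = db

db.connect()
db.create_tables([Reading])

def on_connect(client, userdata, flags, rc):
    # (Re)subscribe every time the broker accepts the connection
    client.subscribe("sensors/#")

def on_message(client, userdata, msg):
    # Store each publication as one row; parse the payload here if needed
    Reading.create(topic=msg.topic, payload=msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.loop_forever()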
Here I am trying to use a plugin to check whether the service is running or not, whether there is any warning or any critical action required, and at the same time to read the performance parameters.
We have used the below plugin to check if a server is alive or not and to read its performance data JSON:
https://github.com/drewkerrigan/nagios-http-json
I am trying to read a JSON file (shown below) which is hosted at http://localhost:8080/sample.json.
The plugin works perfectly on the command line; it shows me all the metrics available.
$:/usr/lib/nagios/plugins$ ./check_http_json.py -H localhost:8080 -p sample.json -m metrics.etp_count metrics.atc_count
OK: Status OK.|'metrics.etp_count'=101 'metrics.atc_count'=0
But when I try the same in the Icinga2 configuration, it doesn't show me these performance metrics; it doesn't give any error, but at the same time it doesn't show any value.
Find the JSON, commands.conf, and services.conf below.
{
    "metrics": {
        "etp_count": "0",
        "atc_count": "101",
        "mean_time": -1.0
    }
}
Below are my commands.conf and services.conf
commands.conf
/* Json Read Command */
object CheckCommand "json_check" {
  import "plugin-check-command"
  command = [ PluginDir + "/check_http_json.py" ]
  arguments = {
    "-H" = "$server_port$"
    "-p" = "$json_path$"
    "-w" = "$warning_value$"
    "-c" = "$critical_value$"
    "-m" = "$Metrics1$,$Metrics2$"
  }
}
services.conf
apply Service "json"{
import "generic-service"
check_command = "json_check"
vars.server_port="localhost:8080"
vars.json_path="sample.json"
vars.warning_value="metrics.etp_count,1:100"
vars.critical_value="metrics.etp_count,101:1000"
vars.Metrics1="metrics.etp_count"
vars.Metrics2="metrics.atc_count"
assign where host.name == NodeName
}
Does anyone have any idea how we can pass multiple values in commands.conf and services.conf?
I have resolved the issue.
I had to change the plugin file "check_http_json.py". The below code:
def checkMetrics(self):
    """Return a Nagios specific performance metrics string given keys and parameter definitions"""
    metrics = ''
    warning = ''
    critical = ''
    if self.rules.metric_list != None:
        for metric in self.rules.metric_list:
was replaced with:
def checkMetrics(self):
    """Return a Nagios specific performance metrics string given keys and parameter definitions"""
    metrics = ''
    warning = ''
    critical = ''
    if self.rules.metric_list != None:
        for metric in self.rules.metric_list[0].split():
Actually, the issue was that the list was not handled properly: because of how the values arrive from services.conf, the plugin treated the metrics as a single string instead of iterating through separate items.
The single string had to be split further to get the individual items in the metrics list.
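A tiny illustration of the difference; the example string is hypothetical, since the exact value depends on how Icinga expands the command arguments:

# What the plugin effectively received from Icinga: one element holding both names
metric_list = ["metrics.etp_count metrics.atc_count"]

# Original loop: iterates once, over the whole string
for metric in metric_list:
    print(metric)            # "metrics.etp_count metrics.atc_count"

# Fixed loop: split the single element so each metric is handled separately
for metric in metric_list[0].split():
    print(metric)            # "metrics.etp_count", then "metrics.atc_count"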
I have set up Jenkins, but I would like to find out what files were added/changed between the current build and the previous build. I'd like to run some long running tests depending on whether or not certain parts of the source tree were changed.
Having scoured the Internet I can find no mention of this ability within Hudson/Jenkins though suggestions were made to use SVN post-commit hooks. Maybe it's so simple that everyone (except me) knows how to do it!
Is this possible?
I have done it the following way. I am not sure if that is the right way, but it seems to be working. You need to have the Jenkins Groovy plugin installed and use the following script.
import hudson.model.*;
import hudson.util.*;
import hudson.scm.*;
import hudson.plugins.accurev.*
def thr = Thread.currentThread();
def build = thr?.executable;
def changeSet= build.getChangeSet();
changeSet.getItems();
ChangeSet.getItems() gives you the changes. Since I use accurev, I did List<AccurevTransaction> accurevTransList = changeSet.getItems();.
Note that the modified list contains duplicate files/names if a file has been committed more than once during the current build window.
The CI server will show you the list of changes, if you are polling for changes and using SVN update. However, you seem to want to be changing the behaviour of the build depending on which files were modified. I don't think there is any out-of-the-box way to do that with Jenkins alone.
A post-commit hook is a reasonable idea. You could parameterize the job, and have your hook script launch the build with the parameter value set according to the changes committed. I'm not sure how difficult that might be for you.
However, you may want to consider splitting this into two separate jobs - one that runs on every commit, and a separate one for the long-running tests that you don't always need. Personally I prefer to keep job behaviour consistent between executions. Otherwise traceability suffers.
echo $SVN_REVISION
svn_last_successful_build_revision=`curl $JOB_URL'lastSuccessfulBuild/api/json' | python -c 'import json,sys;obj=json.loads(sys.stdin.read());print obj["'"changeSet"'"]["'"revisions"'"][0]["'"revision"'"]'`
diff=`svn di -r$SVN_REVISION:$svn_last_successful_build_revision --summarize`
You can use the Jenkins Remote Access API to get a machine-readable description of the current build, including its full change set. The subtlety here is that if you have a 'quiet period' configured, Jenkins may batch multiple commits to the same repository into a single build, so relying on a single revision number is a bit naive.
I like to keep my Subversion post-commit hooks relatively simple and hand things off to the CI server. To do this, I use wget to trigger the build, something like this...
/usr/bin/wget --output-document "-" --timeout=2 \
https://ci.example.com/jenkins/job/JOBID/build?token=MYTOKEN
The job is then configured on the Jenkins side to execute a Python script that leverages the BUILD_URL environment variable and constructs the URL for the API from that. The URL ends up looking like this:
https://ci.example.com/jenkins/job/JOBID/BUILDID/api/json/
Here's some sample Python code that could be run inside the shell script. I've left out any error handling or HTTP authentication stuff to keep things readable here.
import os
import json
import urllib2
# Make the URL
build_url = os.environ['BUILD_URL']
api = build_url + 'api/json/'
# Call the Jenkins server and figure out what changed
f = urllib2.urlopen(api)
build = json.loads(f.read())
change_set = build['changeSet']
items = change_set['items']
touched = []
for item in items:
    touched += item['affectedPaths']
Using the Build Flow plugin and Git:
final changeSet = build.getChangeSet()
final changeSetIterator = changeSet.iterator()
while (changeSetIterator.hasNext()) {
    final gitChangeSet = changeSetIterator.next()
    for (final path : gitChangeSet.getPaths()) {
        println path.getPath()
    }
}
With Jenkins pipelines (pipeline supporting APIs plugin 2.2 or above), this solution is working for me:
def changeLogSets = currentBuild.changeSets
for (int i = 0; i < changeLogSets.size(); i++) {
    def entries = changeLogSets[i].items
    for (int j = 0; j < entries.length; j++) {
        def entry = entries[j]
        def files = new ArrayList(entry.affectedFiles)
        for (int k = 0; k < files.size(); k++) {
            def file = files[k]
            println file.path
        }
    }
}
See How to access changelogs in a pipeline job.
Through Groovy:
<!-- CHANGE SET -->
<% changeSet = build.changeSet
   if (changeSet != null) {
       hadChanges = false %>
<h2>Changes</h2>
<ul>
<%     changeSet.each { cs ->
           hadChanges = true
           aUser = cs.author %>
    <li>Commit <b>${cs.revision}</b> by <b><%= aUser != null ? aUser.displayName : it.author.displayName %>:</b> (${cs.msg})
        <ul>
<%         cs.affectedFiles.each { %>
            <li class="change-${it.editType.name}"><b>${it.editType.name}</b>: ${it.path}</li>
<%         } %>
        </ul>
    </li>
<%     }
       if (!hadChanges) { %>
    <li>No Changes !!</li>
<%     } %>
</ul>
<% } %>
#!/bin/bash
set -e

job_name="whatever"
JOB_URL="http://myserver:8080/job/${job_name}/"
FILTER_PATH="path/to/folder/to/monitor"

python_func="import json, sys
obj = json.loads(sys.stdin.read())
ch_list = obj['changeSet']['items']
_list = [ j['affectedPaths'] for j in ch_list ]
for outer in _list:
    for inner in outer:
        print inner
"

_affected_files=`curl --silent ${JOB_URL}${BUILD_NUMBER}'/api/json' | python -c "$python_func"`

if [ -z "`echo \"$_affected_files\" | grep \"${FILTER_PATH}\"`" ]; then
    echo "[INFO] no changes detected in ${FILTER_PATH}"
    exit 0
else
    echo "[INFO] changed files detected: "
    for a_file in `echo "$_affected_files" | grep "${FILTER_PATH}"`; do
        echo "    $a_file"
    done;
fi;
It is slightly different - I needed a script for Git on a particular folder...
So, I wrote a check based on jollychang's answer.
It can be added directly to the job's exec shell script. If no files are detected it will exit 0, i.e. SUCCESS... this way you can always trigger on check-ins to the repository, but build when files in the folder of interest change.
But... if you wanted to build on demand (i.e. clicking Build Now) with the changes from the last build, you would change _affected_files to:
_affected_files=`curl --silent $JOB_URL'lastSuccessfulBuild/api/json' | python -c "$python_func"`
Note: You have to use Jenkins' own SVN client to get a change list. Doing it through a shell build step won't list the changes in the build.
It's simple, but this works for me:
$DirectoryA = "D:\Jenkins\jobs\projectName\builds" ####Jenkind directory
$firstfolder = Get-ChildItem -Path $DirectoryA | Where-Object {$_.PSIsContainer} | Sort-Object LastWriteTime -Descending | Select-Object -First 1
$DirectoryB = $DirectoryA + "\" + $firstfolder
$sVnLoGfIle = $DirectoryB + "\" + "changelog.xml"
write-host $sVnLoGfIle
I tried to add this as a comment, but code in comments doesn't format well:
Just want to prettify code from heroin's answer:
def changedFiles = []
def changeLogSets = currentBuild.changeSets
for (entries in changeLogSets) {
    for (entry in entries) {
        for (file in entry.affectedFiles) {
            echo "Found changed file: ${file.path}"
            changedFiles += "${file.path}"
        }
    }
}
Keep in mind that in some cases the Git plugin returns an empty changeSet, for example:
The first run in a newly created branch
A 'Build Now' button build
Refer to https://issues.jenkins-ci.org/browse/JENKINS-26354 for more details.