How to use --attach-data-disks when creating a new VM using Azure CLI 2?

I'm trying to create a new VM using existing Managed disks and I keep running into problems because the parameters are not very well documented.
One problem that I haven't figured out is the format of --attach-data-disks
From the name and description of the parameter, this seems to be the way to attach data disks to the VM you are creating, and since the parameter is --attach-data-disks rather than --attach-data-disk, I assume it can attach multiple disks.
What I don't know is what format to use when passing multiple disks. I tried separating them with commas, but the error I got seemed to indicate that it treated the comma-delimited list of disks as one long name for a single disk.
Here is an example of what I am trying to do:
az vm create -g test-group -n testvm2 --os-type windows --attach-os-disk testvm1-osdisk-20181213-033052 --attach-data-disks "testvm1-datadisk-000-20181213-033052,testvm1-datadisk-001-20181213-033052,testvm1-datadisk-002-20181213-033052"
Error I'm getting:
Deployment failed. Correlation ID: 9999. {
  "error": {
    "code": "InvalidParameter",
    "message": "Id /subscriptions/99999999/resourceGroups/lbacompensafe/providers/Microsoft.Compute/disks/testvm1-datadisk-000-20181213-033052,testvm1-datadisk-001-20181213-033052,testvm1-datadisk-002-20181213-033052 is not a valid resource reference.",
    "target": "dataDisk.managedDisk.id"
  }
}
I'm running the commands from Powershell, not Bash, if that makes a difference.

Figured it out. It is in fact a space-delimited list. I didn't try this sooner because I incorrectly assumed the disks would need some sort of grouping or would look like separate parameters, but just listing them out like
--attach-data-disks disk1 disk2 disk3
will add them in that order. Wish the docs would have just said so; it would have saved me a bunch of time.
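For reference, here is the corrected version of the command from the question, with the same disk names simply space-separated:
az vm create -g test-group -n testvm2 --os-type windows --attach-os-disk testvm1-osdisk-20181213-033052 --attach-data-disks testvm1-datadisk-000-20181213-033052 testvm1-datadisk-001-20181213-033052 testvm1-datadisk-002-20181213-033052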


What is a useful Azure IoT Hub JSON message structure for consumption in Time Series Insights

The title sounds quite comprehensive, but my baseline question is quite simple, I guess.
Context
In Azure, I have an IoT hub, which I am sending messages to. I use a modified version of one of the samples from the Azure IoT SDK for Python.
Sending works fine. However, instead of a string, I send a JSON structure.
When I watch the events flowing into the IoT hub, using the Cloud shell, it looks like this:
PS /home/marcel> az iot hub monitor-events --hub-name weathertestiothub
This extension 'azure-cli-iot-ext' is deprecated and scheduled for removal. Please remove and add 'azure-iot' instead.
Starting event monitor, use ctrl-c to stop...
{
  "event": {
    "origin": "raspberrypi-zero-wh",
    "payload": "{ \"timestamp\": \"1608643863720\", \"locationDescription\": \"Attic\", \"temperature\": \"21.941\", \"relhumidity\": \"71.602\" }"
  }
}
Issue
The data seems fine, except the payload looks strange here. BUT, the payload is literally what I send from the device, using the SDK sample.
Is this the correct way to do it? In the end, I have a very hard time actually getting the data into the Time Series Insights model, so I guess my structure is to blame.
Question
What is a recommended JSON data structure to send to the IoT hub for later use?
You should add the following two lines to your message in your Python SDK sample:
msg.content_encoding = "utf-8"
msg.content_type = "application/json"
This should resolve your formatting concern.
We've also updated our samples to reflect this: https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/sync-samples/send_message.py
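For context, here is a minimal sketch of where those two lines fit (the connection string is a placeholder, and the payload values are the ones from the question):
import json
from azure.iot.device import IoTHubDeviceClient, Message

conn_str = "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder: your device connection string
client = IoTHubDeviceClient.create_from_connection_string(conn_str)

payload = {"locationDescription": "Attic", "temperature": 21.941, "relhumidity": 71.602}
msg = Message(json.dumps(payload))

# Without these two properties, the hub treats the body as an opaque string
msg.content_encoding = "utf-8"
msg.content_type = "application/json"

client.send_message(msg)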
I ended up using the tip by @elhorton, but it was not the key change. Nonetheless, the formatting in the Azure Shell monitor now looks much better:
"event": {
"origin": "raspberrypi-zero-wh",
"payload": {
"temperature": 21.543947753906245,
"humidity": 69.22964477539062,
"locationDescription": "Attic"
}
}
The key was:
including the message source time in ISO format:
from datetime import datetime

# Note: the property name says "utc", so datetime.utcnow() may be the more accurate choice here
timestampIso = datetime.now().isoformat()
message.custom_properties["iothub-creation-time-utc"] = timestampIso
and using locationDescription as the Time Series ID Property; see https://learn.microsoft.com/en-us/azure/time-series-insights/how-to-select-tsid. (Maybe I could also have taken the iothub-connection-device-id, but I did not test that alone specifically.)
I guess using "iothub-connection-device-id" would make "raspberrypi-zero-wh" the name of the time series instance. I agree with your choice of "locationDescription" as the TSID; Attic becomes the time series instance name, and temperature and humidity will be your variables.

Data Studio Connector can't get access token for BigQuery Service Account: Access not granted or expired

I'm trying to make a community connector to connect my database in BigQuery to Data Studio, using the service account that I hooked up as the Owner/DataViewer/JobUser of the BigQuery project. I know that the service account works when connecting to BigQuery because I've tested it elsewhere. I copied the connector code from this tutorial (https://developers.google.com/datastudio/solution/blocks/using-service-accounts) almost exactly, replacing the SQL string with my query and adding some different query parameters. I also stored the service account's credentials in my script properties by pasting in the JSON object and storing it like:
var service_account_creds_obj = {
  "type": "service_account",
  "project_id": ...
  ...
}
scriptProperties.setProperty('SERVICE_ACCOUNT_CREDS', JSON.stringify(service_account_creds_obj));
However, I always get stuck in the flow when my getData function calls getOauthService().getAccessToken(), which never successfully returns. When I create a report using the connector, I get this error: "Access not granted or expired." I can't find the documentation for getAccessToken, and I'm having trouble understanding why it won't terminate. I can see that it doesn't return because a console.log immediately before that line displays, but it never gets to the log on the next line. Then my try-catch block catches the error that I'm seeing. Note that my getOauthService function is exactly the same as the one from the documentation/tutorial example, except that I've played around with the input text in the call to createService. That input text shouldn't really matter, though, right?
Please, I've been trying to debug this for hours, but the documentation on this is pretty horrible, and it's really hard to debug since the flow of the code is handled in the background and Stackdriver logging is really buggy.
I figured out my problem. The documentation posted above said to set the OAuth2 scope to https://www.googleapis.com/auth/bigquery.readonly. However, I naively included
"oauthScopes": ["https://www.googleapis.com/auth/bigquery.readonly"]
in my manifest file. Meanwhile, the code I copied over from the documentation already included this line:
.setScope(['https://www.googleapis.com/auth/bigquery.readonly']);
So I'm not sure exactly why declaring the scope in both places caused a problem, but it must have prevented the service created by OAuth2.createService from being set up properly.

AWS SSM Parameter Store: How can I edit multi-line "SecureString" values using the console?

Currently, I use a single SSM parameter to store a set of properties separated by newlines, like this:
property1=value1
property2=value2
property3=value3
(I am aware of the 4K size limit, it's fine.)
This works well, for normal String type parameters that store non-sensitive information like environment configuration, but I'd also like to do similar for secrets using the SecureString parameter type.
The problem is that I can't edit the parameter value in the console, because it uses an HTML input field of type="password" that doesn't handle newlines.
The multi-line value works fine with the actual parameter store backend - I can set a value with multiple lines with the SSM API no problem and they can be read with the EC2 CLI properly too.
But I can't edit them using the console. This is a problem because the whole point of using a SecureString parameter is that I intend the only place to edit/view these secrets to be via the console (so that permissions are controlled and access is audited).
There's a few infrastructure workarounds I could implement (one parameter for each secret, store the secrets on S3 or other secret storing service, etc.) but they all have drawbacks - I'm just trying to find out if there's a way around this using the console?
Is there any way I can work around this and use the console to edit multi-line SecureString parameters?
Any kind of browser workaround or hack that I might be able to use to tell the browser to use a textarea instead of a "password" type field?
I'm using Chrome, but I'd be happy to work around this by using another browser or something (editing the secrets is pretty rare, and viewing multi-line values in the console works fine).
EDIT
After posting this question, AWS notified me there was a whole new "AWS Systems Manager" UI, but it still has the same problem - I tried the below browser hacks on this new UI, but no luck.
Failed browser hack attempt 1: I tried opening the browser console, running document.getElementById("Value").value = "value1\nvalue2" and then clicking the save button, which set the value I injected, but the newline was filtered out.
Failed browser hack attempt 2: I tried using the browser inspector to change the element to a textarea, then typed in two lines of input and clicked save, but that didn't set the value at all.
From https://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-file, I learned you can pass a file as the --value argument. So if your file is called secrets.properties, you can do this:
aws ssm put-parameter --type SecureString --name secrets --value file://secrets.properties
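To verify, you can read the multi-line value back with the CLI (--with-decryption is required for SecureString values):
aws ssm get-parameter --name secrets --with-decryption --query Parameter.Value --output text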
I found a way to do it, but it's too much effort and too weird - if anyone can find a simpler way, I will mark that as the answer.
The hacky workaround is to install the "Tamper Chrome" extension + app, then capture the XHR request as the browser sends it and edit the new lines into the JSON.
Blech. Plus "Tamper Chrome" is pretty awful, I don't want to run it on my machine.
It might be better to use the new Secrets Manager that was launched recently. The interface for it is very close to Parameter Store, but it has better support for multiple parameters in one place.
I wonder if the change in the console was due to the expected release of that service, since they have a pricing model around secrets, whereas Parameter Store is free.
In the end, I decided the answer to this question is "don't do that". Not that I would've wanted to hear that when I was trying to make it work.
You should use a separate SSM param per secret for these reasons:
ability to grant permissions at a fine-grained level; e.g. you have an API password for calling your service, and a DB password for the service to talk to a DB. If you store them in the same secret, you can't grant access to only the API password.
ability to track key access separately; the SSM access logs can only tell you that the target machine/user accessed the SSM param at that time, not which secret inside it was accessed
ability to use separate KMS keys for encryption
Just watch out for the fact that you can only request a max of 10 SSM params at a time.
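For illustration, the one-parameter-per-secret layout might look like this (the /myapp/prod names are hypothetical; get-parameters-by-path fetches a whole path tree and paginates, which sidesteps the 10-parameter limit of get-parameters):
aws ssm put-parameter --type SecureString --name /myapp/prod/db-password --value "..."
aws ssm put-parameter --type SecureString --name /myapp/prod/api-password --value "..."
aws ssm get-parameters-by-path --path /myapp/prod --with-decryption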
If you want, you can try my app: https://github.com/ledongthuc/awssecretsmanagerui
I created it to make updating multi-line and binary values easier. Hope it's helpful in your case.

How do I uniquely identify an appx package?

I'm working on a UWP app that will be installed on a device running Windows 10 IoT.
I need to be able to uniquely identify the appx package that corresponds to my app. I need something that is not going to change between builds and releases.
I am able to get the following information from a web request to http://insertIPAddressHere:8080/api/app/packagemanager/packages:
I've removed the parts that might be sensitive, but you can type Get-AppxPackage into PowerShell to get an idea of what the removed fields look like. The PackageFamilyName from PowerShell seems to correlate with the PackageRelativeId from the web request.
{
  "InstalledPackages": [
    ...
    {
      "CanUninstall": true,
      "Name": "removed",
      "PackageFamilyName": "removed",
      "PackageFullName": "removed",
      "PackageOrigin": 5,
      "PackageRelativeId": "removed",
      "Publisher": "removed",
      "Version": {
        "Build": 68,
        "Major": 0,
        "Minor": 0,
        "Revision": 0
      },
      "RegisteredUsers": [
        {
          "UserDisplayName": "removed",
          "UserSID": "removed"
        }
      ]
    }
  ]
}
I thought about hardcoding in the PackageRelativeId, but I'm not sure that's an appropriate way to identify my app. It has what looks like some randomly generated characters, and I haven't yet found anything reassuring me that the value will remain the same between builds and revisions. I can't find it anywhere in my solution; it's only mentioned in some of the compiled files.
PackageFullName is your unique identifier in this case -- it will encode the package name, version, and publisher.
If you don't want the package version as part of your identifier, use PackageFamilyName.
You can see all this via PowerShell:
get-appxpackage | where name -eq "My.PackageName"
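For instance, to pull out just the two identifiers (the package name is hypothetical):
get-appxpackage | where name -eq "My.PackageName" | select PackageFullName, PackageFamilyName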

Cannot create notebook immediately after creating group (modern group not found)

With Microsoft Graph I would like to create a Group and then after that create a Notebook (onenote) in that group directory.
First I execute an HTTP call to:
POST /groups
with the required access token in the header and a JSON object of the Group in the body (http://graph.microsoft.io/en-us/docs/api-reference/beta/api/group_post_groups).
If successful, it returns a JSON object with the complete properties of the group.
So far there was no problem; I've managed to get the {ID} (a GUID) of the group, which is required to create a notebook. Let's say for this example the {ID} of my Group is 123456789-abcd-4321-bbbb-9876543210aaa
Next, for the notebook, I execute an HTTP call to:
POST /groups/{ID}/notes/notebooks
And then I got the following JSON response:
{
  "error": {
    "code": "20160",
    "message": "No modern group was found that matches the ID 123456789-abcd-4321-bbbb-9876543210aaa",
    "innerError": {
      "request-id": "85b85297-ad3b-424d-b18f-2b1da904f5fc",
      "date": "2016-01-29T08:19:27"
    }
  }
}
I've tried many things to work around this issue. One time I set a breakpoint between the method that creates the group and the method that creates the notebook in my program. I ran my program until the point where it had finished creating the group. Then I stopped, tried to open the notebook tab from the Office 365 website in the browser, and found this:
In English it means "We are creating your notebook. It may take a few minutes. We will finish this even if you close the browser". It took about 10 seconds before I was redirected to the OneNote Online page (the URL is my SharePoint tenant name).
After that I continued my program, and suddenly it successfully created the notebook.
I need some help here! I need to create the notebook immediately after creating the group, without having to open up a browser. I think it has something to do with SharePoint site creation for the Group.
Any help would be appreciated!
Creating groups does take some time (usually ~ 5 minutes).
It is recommended that you first query to see whether the group has been provisioned, and only then create the notebook.
You can query the drive endpoint to find out if the group has been provisioned:
GET /beta/groups/{id}/drive
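A minimal polling sketch in Python with the requests library (token acquisition is assumed to happen elsewhere; the group ID is the sample one from above, and the paths are the beta endpoints from this question):
import time
import requests

GRAPH = "https://graph.microsoft.com/beta"
access_token = "..."  # placeholder: acquired via your existing auth flow
group_id = "123456789-abcd-4321-bbbb-9876543210aaa"  # the ID returned by POST /groups
headers = {"Authorization": "Bearer " + access_token}

# Poll the group's drive until provisioning finishes (or give up after ~5 minutes)
for _ in range(30):
    if requests.get(GRAPH + "/groups/" + group_id + "/drive", headers=headers).status_code == 200:
        break
    time.sleep(10)
else:
    raise TimeoutError("group was not provisioned in time")

# Provisioning is done, so the notebook call should now succeed
resp = requests.post(
    GRAPH + "/groups/" + group_id + "/notes/notebooks",
    headers=headers,
    json={"name": "My Notebook"},  # notebook name is illustrative
)
resp.raise_for_status()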