Find pricing information for a GCE instance given machine type, region and commitment type

I use the GCP metadata API (http://metadata.google.internal/computeMetadata/v1/) to get information about the instance that a process is running on, including machine type (e.g. "projects/818238156224/machineTypes/n1-standard-4" -- presumably the important part is the "n1-standard-4"), region, zone, and whether the instance is preemptible.
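Fetching those fields is a few GETs against the metadata server; a minimal Python sketch (the paths are the standard computeMetadata/v1 ones, and the Metadata-Flavor header is required):

import requests

METADATA = "http://metadata.google.internal/computeMetadata/v1"
HEADERS = {"Metadata-Flavor": "Google"}  # required, otherwise the server rejects the request

def metadata(path):
    return requests.get(f"{METADATA}/{path}", headers=HEADERS, timeout=2).text

machine_type = metadata("instance/machine-type").rsplit("/", 1)[-1]  # e.g. "n1-standard-4"
zone = metadata("instance/zone").rsplit("/", 1)[-1]                  # e.g. "southamerica-east1-a"
region = zone.rsplit("-", 1)[0]                                      # e.g. "southamerica-east1"
preemptible = metadata("instance/scheduling/preemptible") == "TRUE"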
I would like to be able to retrieve information programmatically about how much GCP is charging (e.g. per hour) for usage of the instance.
I can query the GCP billing API (https://cloudbilling.googleapis.com/v1/services/6F81-5844-456A/skus), but that returns JSON like
{
  "name": "services/6F81-5844-456A/skus/0048-21CE-74C3",
  "skuId": "0048-21CE-74C3",
  "description": "Preemptible N2 Custom Instance Core running in Sao Paulo",
  "category": {
    "serviceDisplayName": "Compute Engine",
    "resourceFamily": "Compute",
    "resourceGroup": "CPU",
    "usageType": "Preemptible"
  },
  "serviceRegions": [
    "southamerica-east1"
  ],
  "pricingInfo": [
    {
      "summary": "",
      "pricingExpression": {
        "usageUnit": "h",
        "usageUnitDescription": "hour",
        "baseUnit": "s",
        "baseUnitDescription": "second",
        "baseUnitConversionFactor": 3600,
        "displayQuantity": 1,
        "tieredRates": [
          {
            "startUsageAmount": 0,
            "unitPrice": {
              "currencyCode": "USD",
              "units": "0",
              "nanos": 11538000
            }
          }
        ]
      },
      "currencyConversionRate": 1,
      "effectiveTime": "2021-05-26T08:47:05.220Z"
    }
  ],
  "serviceProviderName": "Google",
  "geoTaxonomy": {
    "type": "REGIONAL",
    "regions": [
      "southamerica-east1"
    ]
  }
}
And it's very unclear how to retrieve an object in one API given an object from the other.
Do I need to parse the description somehow? Does that even work? Is there a better way?
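To illustrate the description-matching idea, here is a sketch that pages through the catalog and matches on serviceRegions, category.usageType, and a substring of the description (the API key, the usage-type strings such as "OnDemand" and "Preemptible", and the substring heuristic are assumptions; note also that an instance's price is typically split across separate per-vCPU and per-GB-RAM SKUs that you would combine using the machine type's shape):

import requests

CATALOG = "https://cloudbilling.googleapis.com/v1/services/6F81-5844-456A/skus"

def hourly_usd(sku):
    # unitPrice is whole currency units plus nanos (1e-9 units); the sample
    # SKU above works out to 0 + 11538000 / 1e9 = 0.011538 USD per hour.
    rate = sku["pricingInfo"][0]["pricingExpression"]["tieredRates"][0]["unitPrice"]
    return int(rate["units"]) + rate["nanos"] / 1e9

def matching_skus(api_key, region, machine_family, usage_type):
    page_token = ""
    while True:
        page = requests.get(CATALOG, params={"key": api_key, "pageToken": page_token}).json()
        for sku in page.get("skus", []):
            if (region in sku.get("serviceRegions", [])
                    and sku["category"]["usageType"] == usage_type
                    and machine_family.upper() in sku["description"].upper()):
                yield sku["description"], hourly_usd(sku)
        page_token = page.get("nextPageToken", "")
        if not page_token:
            break

# e.g. for an n1-standard-4, combine the matching per-core and per-GB RAM
# SKUs with the machine type's 4 vCPUs and 15 GB of memory.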

Related

Looking for a way to filter data within an Azure API call

I am looking for a way to extract data out of an Azure environment. The problem I'm currently having is that when I use my API call I receive about 60 lines of JSON while I only need 4 of those lines. To reduce load, increase efficiency, and remove the need for parsing within the other environment where I need the data, I want to find a way to filter the data in the API call. Currently my call looks like this.
https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Web/sites/{application or resource}/providers/microsoft.insights/metrics?api-version=2021-05-01&metricnames=IoWriteBytesPerSecond,IoReadBytesPerSecond&timeSpan=PT1M
Now the output looks something like this.
{
  "cost": 0,
  "timespan": "2022-10-11T10:18:00Z/2022-10-11T10:19:00Z",
  "interval": "PT1M",
  "value": [
    {
      "id": "/subscriptions//resourceGroups//providers/Microsoft.Web/sites//providers/Microsoft.Insights/metrics/IoWriteBytesPerSecond",
      "type": "Microsoft.Insights/metrics",
      "name": {
        "value": "IoWriteBytesPerSecond",
        "localizedValue": "IO Write Bytes Per Second"
      },
      "displayDescription": "The rate at which the app process is writing bytes to I/O operations. For WebApps and FunctionApps.",
      "unit": "BytesPerSecond",
      "timeseries": [
        {
          "metadatavalues": [],
          "data": [
            {
              "timeStamp": "2022-10-11T10:18:00Z",
              "total": 288.0
            }
          ]
        }
      ],
      "errorCode": "Success"
    },
    {
      "id": "/subscriptions//resourceGroups//providers/Microsoft.Web/sites//providers/Microsoft.Insights/metrics/IoReadBytesPerSecond",
      "type": "Microsoft.Insights/metrics",
      "name": {
        "value": "IoReadBytesPerSecond",
        "localizedValue": "IO Read Bytes Per Second"
      },
      "displayDescription": "The rate at which the app process is reading bytes from I/O operations. For WebApps and FunctionApps.",
      "unit": "BytesPerSecond",
      "timeseries": [
        {
          "metadatavalues": [],
          "data": [
            {
              "timeStamp": "2022-10-11T10:18:00Z",
              "total": 284.0
            }
          ]
        }
      ],
      "errorCode": "Success"
    }
  ],
  "namespace": "Microsoft.Web/sites",
  "resourceregion": "westeurope"
}
Out of all these lines I only need about 4 objects. Is it possible to use the $filter function within the URL of the API call? If yes, can someone point me to a forum, doc, or example where this is used?
Thanks, regards
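For what it's worth, the metrics API's $filter parameter is for filtering dimension values rather than trimming fields from the response, so the four numbers may be easiest to extract client-side after the call. A minimal Python sketch, assuming the same URL as above and a bearer token for https://management.azure.com/:

import requests

def io_rates(url, token):
    # Call the metrics endpoint from the question and keep only the numbers.
    body = requests.get(url, headers={"Authorization": f"Bearer {token}"}).json()
    rates = {}
    for metric in body["value"]:
        name = metric["name"]["value"]       # e.g. "IoWriteBytesPerSecond"
        data = metric["timeseries"][0]["data"]
        rates[name] = data[-1].get("total")  # most recent datapoint
    return rates

# e.g. {"IoWriteBytesPerSecond": 288.0, "IoReadBytesPerSecond": 284.0}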

FIWARE - Orion Context Broker as Context Provider

I'm having a hard time understanding how context providers work in the Orion Context Broker.
I followed the examples in the step-by-step guide written by Jason Fox. However, I still do not understand exactly what happens in the background and how the context broker creates the POST from the registration. Here is what I am trying to do:
I have a WeatherStation that provides sensor data for a neighborhood.
{
  "id": "urn:ngsi-ld:WeatherStation:001",
  "type": "Device:WeatherStation",
  "temperature": {
    "type": "Number",
    "value": 20.5,
    "metadata": {}
  },
  "windspeed": {
    "type": "Number",
    "value": 60.0,
    "metadata": {}
  }
}
Now I would like the WeatherStation to be a context provider for all buildings.
{
  "id": "urn:ngsi-ld:building:001",
  "type": "Building"
}
Here is the registration that I am trying to use.
{
  "id": null,
  "description": "Random Weather Conditions",
  "provider": {
    "http": {
      "url": "http://localhost:1026/v2"
    },
    "supportedForwardingMode": "all"
  },
  "dataProvided": {
    "entities": [
      {
        "id": "null",
        "idPattern": ".*",
        "type": "Building",
        "typePattern": null
      }
    ],
    "attrs": [
      "temperature",
      "windspeed"
    ],
    "expression": null
  },
  "status": "active",
  "expires": null,
  "forwardingInformation": null
}
The context broker accepts both entities and the registration without any error.
Since I have a multi-tenant setup, I use one fiware_service for the complete neighborhood, but every building would later have a separate fiware_servicepath. Hence, the weather station has a different servicepath than the building, although I also tried to put them both on the same path.
For now I used the same headers for all entities.
{
  "fiware-service": "filip",
  "fiware-servicepath": "/testing"
}
Here is the log of the context broker (version: 3.1.0):
INFO#2021-09-23T19:17:17.944Z logTracing.cpp[212]: Request forwarded (regId: 614cd2b511c25270060d873a): POST http://localhost:1026/v2/op/query, request payload (87 bytes): {"entities":[{"idPattern":".*","type":"Building"}],"attrs":["temperature","windspeed"]}, response payload (2 bytes): [], response code: 200
INFO#2021-09-23T19:17:17.944Z logTracing.cpp[130]: Request received: POST /v2/op/query?options=normalized%2Ccount&limit=1000, request payload (55 bytes): {"entities": [{"idPattern": ".*", "type": "Building"}]}, response code: 200
The log says that it receives the request and forwards it as expected. However, as I understand it, this would simply point to the same building entity again; hence it is effectively circular forwarding. I also cannot tell anything about the headers of the request.
I do not understand how the forwarded request from the building can actually query the weather station for information. When I query my building, I still only receive the entity with no properties of its own:
{
  "id": "urn:ngsi-ld:building:001",
  "type": "Building"
}
I also tried to vary the url of the registration but with no success.
Is this scenario actually possible with the current implementation? It would be very useful.
Is there any example of this that also includes the headers?
I know that I could simply use a reference, but that would put more work on the user.
Thanks for any help on this.
It is messy, but you could achieve this via a subscription. Hold the weather station as a separate entity in the context broker and poll or push updates into the entity. The subscription would fire whenever the data changes and make two NGSI requests:
Find all entities which have a Relationship servicedBy=WeatherStationX
Run an upsert on all entities to add a Property to each entity:
{
  "temperature": {
    "type": "Property",
    "value": 7,
    "unitCode": "CEL",
    "observedAt": "XXXXX",
    "providedBy": "WeatherStation1"
  }
}
Where observedAt comes either from the payload of the weather station or the notification timestamp.
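A rough NGSI-v2 sketch of those two requests against a local Orion (the servicedBy relationship, entity ids, and tenant headers mirror the question and are assumptions):

import requests

ORION = "http://localhost:1026/v2"
HEADERS = {"fiware-service": "filip", "fiware-servicepath": "/testing"}

def propagate(temperature, observed_at):
    # 1. Find all Buildings related to this weather station.
    params = {"type": "Building", "q": "servicedBy==urn:ngsi-ld:WeatherStation:001"}
    buildings = requests.get(f"{ORION}/entities", params=params, headers=HEADERS).json()

    # 2. Batch-append the measured attribute to every matching entity.
    payload = {
        "actionType": "append",
        "entities": [
            {
                "id": b["id"],
                "type": "Building",
                "temperature": {
                    "type": "Number",
                    "value": temperature,
                    "metadata": {
                        "observedAt": {"type": "DateTime", "value": observed_at},
                        "providedBy": {"type": "Text", "value": "WeatherStation1"},
                    },
                },
            }
            for b in buildings
        ],
    }
    requests.post(f"{ORION}/op/update", json=payload, headers=HEADERS)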
Within the existing IoT Agents, provisioning the link attribute allows a device to propagate measures to a second entity (e.g. this Thermometer entity measures temperature for an associated Building entity):
{
  "entity_type": "Device",
  "resource": "/iot/d",
  "protocol": "PDI-IoTA-UltraLight",
  ..etc
  "attributes": [
    {
      "object_id": "l",
      "name": "temperature",
      "type": "Float",
      "metadata": {
        "unitCode": {"type": "Text", "value": "CEL"}
      }
    }
  ],
  "static_attributes": [
    {
      "name": "controlledAsset",
      "type": "Relationship",
      "value": "urn:ngsi-ld:Building:001",
      "link": {
        "attributes": ["temperature"],
        "name": "providedBy",
        "type": "Building"
      }
    }
  ]
}
At the moment the logic just links directly one-to-one, but it would be possible to raise a PR to check for an Array and update multiple entities in an upsert - the relevant section of code is here
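For reference, a hypothetical provisioning call for a device like the one above, against an UltraLight IoT Agent's north port (the host, port, device id, and tenant headers are assumptions):

import requests

IOTA = "http://localhost:4041/iot/devices"
HEADERS = {"fiware-service": "filip", "fiware-servicepath": "/testing"}

payload = {
    "devices": [
        {
            "device_id": "thermometer001",
            "entity_name": "urn:ngsi-ld:Device:thermometer001",
            "entity_type": "Device",
            "protocol": "PDI-IoTA-UltraLight",
            "transport": "HTTP",
            "attributes": [
                {"object_id": "l", "name": "temperature", "type": "Float"}
            ],
            "static_attributes": [
                {
                    "name": "controlledAsset",
                    "type": "Relationship",
                    "value": "urn:ngsi-ld:Building:001",
                    "link": {
                        "attributes": ["temperature"],
                        "name": "providedBy",
                        "type": "Building"
                    }
                }
            ]
        }
    ]
}
requests.post(IOTA, json=payload, headers=HEADERS)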

How can I see logs of the JSON post bodies sent by zapier to my CRM (Current RMS) via the Webhook zap during setup and testing?

I'm trying to send new users / new customers of my WooCommerce store into the rental management app current-rms.com as new Organisations / new Contacts. Since Current RMS does not have a native Zap, I am trying to use the generic Webhook zap that Zapier maintains.
Specifically, I'd like to see the JSON body sent in the Zapier posts that I make during the setup and testing of the Zap, after clicking "Make a Zap!". The Task History is not detailed enough, nor does it show hits during test and setup, since the Zap is not live yet.
My trigger is a WooCommerce New Customer. This is working with Zapier WooCommerce Plugin and webhooks OK.
My action is the generic Zapier "Webhooks" Zap. The label "instant" appears next to it in the list at /app/zaps and it is "off".
One version uses JSON PAYLOAD as the action.
Another version uses CUSTOM PAYLOAD as the action.
Wrap request in array is YES.
Unflatten is YES.
My API key and subdomain are in the app URL as query strings and working OK.
When I hit test I get:
We had trouble sending your test through.
The app returned "Invalid JSON - missing or invalid entry for 'member'". This usually happens when your Zap is missing a required field or a field value isn't in a recognized format.
We made a request to api.current-rms.com and received (400) Bad Request.
Official docs are at: https://api.current-rms.com/doc#members-members-post
Logging is available on the Current RMS side.
Part of the authentication for Current RMS involves knowing the domain of the account you are trying to access; in my case it's therockfactory, since it's an account for my company https://therockfactory.net/
https://api.current-rms.com/api/v1/members?apikey=APIKEYCENSORED&subdomain=therockfactory
which returns the following when I use the correct API key:
{"webhook_logs":[],"meta":{"total_row_count":0,"row_count":0,"page":1,"per_page":20}}
Maybe if I could see the actual hit that Zapier is posting to Current RMS, I could wrap my confused brain around it better? What, me worry.
The hit should look somewhat similar to the example below, but I've not been able to locate it in Zapier so far...
Headers
Content-Type: application/json
Body
{
  "member": {
    "name": "Chris Bralton",
    "description": "Pictures and leaned back was strewn at one would rather more. People don't want of his own means of one hand! Unless it from our pioneer has he fallen tree but that ever stronger and a. Hid among us against the full of verdure through by my eyes.",
    "active": true,
    "bookable": false,
    "location_type": 0,
    "locale": "en-GB",
    "membership_type": "Contact",
    "lawful_basis_type_id": 10001,
    "sale_tax_class_id": 1,
    "purchase_tax_class_id": 1,
    "tag_list": [
      "[\"Red\", \"Blue\", \"Green\"]"
    ],
    "custom_fields": {},
    "membership": {},
    "primary_address": {
      "name": "Chris Branson",
      "street": "16 The Triangle",
      "postcode": "NG2 1AE",
      "city": "Nottingham",
      "county": "Nottinghamshire",
      "country_id": "1",
      "country_name": "United Kingdom",
      "type_id": 3001,
      "address_type_name": "Primary",
      "created_at": "2015-06-29T10:00:00.000Z",
      "updated_at": "2015-06-29T10:30:00.000Z"
    },
    "emails": [
      {
        "address": "abigail.parker#ggmail.co.uk",
        "type_id": 4001,
        "email_type_name": "Work",
        "id": 1
      }
    ],
    "phones": [
      {
        "number": "+44 115 9793399",
        "type_id": 6001,
        "phone_type_name": "Work",
        "id": 1
      }
    ],
    "links": [
      {
        "address": "www.facebook.com/profile.php?id=566828251",
        "type_id": 5002,
        "link_type_name": "Facebook",
        "id": 1
      }
    ],
    "addresses": [
      {
        "name": "Chris Branson",
        "street": "16 The Triangle",
        "postcode": "NG2 1AE",
        "city": "Nottingham",
        "county": "Nottinghamshire",
        "country_id": "1",
        "country_name": "United Kingdom",
        "type_id": 3002,
        "address_type_name": "Billing",
        "created_at": "2017-06-29T10:00:00.000Z",
        "updated_at": "2017-06-29T10:30:00.000Z",
        "id": 1
      }
    ],
    "service_stock_levels": [
      {
        "item_id": 10,
        "store_id": 1,
        "member_id": 1,
        "asset_number": "Chris Bralton",
        "serial_number": "",
        "location": "",
        "stock_type": 3,
        "stock_category": 60,
        "quantity_held": "1.0",
        "quantity_allocated": "0.0",
        "quantity_unavailable": "0.0",
        "quantity_on_order": "0.0",
        "starts_at": "",
        "ends_at": "",
        "icon": {
          "iconable_id": 85,
          "id": 1,
          "image_file_name": "abigail.jpeg",
          "url": "https://s3.amazonaws.com/current-rms-development/64a0ccd0-5fbd-012f-2201-60f847290680/icons/46/original/abigail.jpeg",
          "thumb_url": "https://s3.amazonaws.com/current-rms-development/64a0ccd0-5fbd-012f-2201-60f847290680/icons/46/thumb/abigail.jpeg",
          "created_at": "2015-06-29T10:00:00.000Z",
          "updated_at": "2015-06-29T10:30:00.000Z",
          "iconable_type": "StockLevel"
        },
        "custom_fields": {},
        "id": 487,
        "item_name": "Sound Engineer",
        "store_name": "Nottingham",
        "stock_type_name": "Service",
        "stock_category_name": "Resource"
      }
    ],
    "day_cost": "",
    "hour_cost": "",
    "distance_cost": "",
    "flat_rate_cost": "",
    "icon": {
      "image": ""
    },
    "child_members": [
      {
        "relatable_id": 317,
        "relatable_type": "Member",
        "related_id": 25,
        "related_type": "Member"
      }
    ],
    "parent_members": [
      {
        "relatable_id": 317,
        "relatable_type": "Member",
        "related_id": 25,
        "related_type": "Member"
      }
    ]
  }
}
UPDATE: After reading my chosen answer I was able to see what Zapier was sending:
[
  {
    "member[emails_attributes][0][address]": "test#test.co.nz",
    "member[membership_type]": "Organisation",
    "member[name]": "Testafari Testing"
  }
]
You can send your webhook to a tool like this one to inspect the payloads that are being sent from anywhere on the internet: https://requestbin.com/
You can find more help regarding the use of Webhooks by Zapier, and other ideas on how to troubleshoot issues stemming from its use, here: https://zapier.com/apps/webhook/help#inspect-the-requests
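As an illustration, once the payload is visible, the 400 above typically means the body isn't nested under a single member key. A minimal Python sketch of a request matching the flattened fields shown in the update (the API key is a placeholder; the endpoint and query-string auth are the ones quoted in the question):

import requests

url = "https://api.current-rms.com/api/v1/members"
params = {"apikey": "APIKEYCENSORED", "subdomain": "therockfactory"}
body = {
    "member": {
        "name": "Testafari Testing",
        "membership_type": "Organisation",
        "emails_attributes": [{"address": "test@test.co.nz"}],
    }
}
resp = requests.post(url, params=params, json=body)
print(resp.status_code, resp.json())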

How to check if name already exists? Azure Resource Manager Template

Is it possible to check, in an ARM template, whether the name for my Virtual Machine already exists?
I am developing a Solution Template for the Azure Marketplace. Maybe it is possible to mark a parameter in the UiDefinition as unique?
The goal is to reproduce this green check mark.
A couple notes...
VM Names only need to be unique within a resourceGroup, not within the subscription
Solution Templates must be deployed to empty resourceGroups, so collisions with existing resources aren't possible
For solution templates the preference is that you simply name the VMs for the user, rather than asking - use something that is appropriate for the workload (e.g. jumpbox) - not all solutions do this but we're trying to improve that experience
Given that, it's not likely we'll ever build a control that checks for naming collisions on resources without globally unique constraints.
That help?
This looks impossible, according to the documentation: there are no validation scenarios for this.
I assume that you should be using the Microsoft.Common.TextBox UI element in your createUiDefinition.json.
I have tried to reproduce the green check by creating a simple createUiDefinition.json with a Microsoft.Common.TextBox UI element, as shown below.
{
  "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json",
  "handler": "Microsoft.Compute.MultiVm",
  "version": "0.1.2-preview",
  "parameters": {
    "basics": [
      {
        "name": "textBoxA",
        "type": "Microsoft.Common.TextBox",
        "label": "VM Name",
        "defaultValue": "",
        "toolTip": "Please enter a VM name",
        "constraints": {
          "required": true
        },
        "visible": true
      }
    ],
    "steps": [],
    "outputs": {}
  }
}
I am able to reproduce the green check beside the VM Name textbox.
However, this green check DOES NOT imply the VM name is available.
This is because, based on my testing, even if I use an existing VM name in the same subscription, it still shows the green check.
Based on the officially documented constraints supported by the Microsoft.Common.TextBox UI element, it DOES NOT validate name availability.
Hope this helps!
While bmoore's point is correct that it's unlikely you would ever need this for a VM (nor is there an API for it), there are other compute resources that do have global naming requirements.
As of 2022, this is now possible with the use of the ArmApiControl UI element. It allows you to call ARM APIs as part of validation in the createUiDefinition.json. Here is an example using the check-name-availability API for an Azure App Service.
{
  "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#",
  "handler": "Microsoft.Azure.CreateUIDef",
  "version": "0.1.2-preview",
  "parameters": {
    "basics": [
      {}
    ],
    "steps": [
      {
        "name": "domain",
        "label": "Domain Names",
        "elements": [
          {
            "name": "domainInfo",
            "type": "Microsoft.Common.InfoBox",
            "visible": true,
            "options": {
              "icon": "Info",
              "text": "Pick the domain name that you want to use for your app."
            }
          },
          {
            "name": "appServiceAvailabilityApi",
            "type": "Microsoft.Solutions.ArmApiControl",
            "request": {
              "method": "POST",
              "path": "[concat(subscription().id, '/providers/Microsoft.Web/checknameavailability?api-version=2021-02-01')]",
              "body": "[parse(concat('{\"name\":\"', concat('', steps('domain').domainName), '\", \"type\": \"Microsoft.Web/sites\"}'))]"
            }
          },
          {
            "name": "domainName",
            "type": "Microsoft.Common.TextBox",
            "label": "Domain Name Word",
            "toolTip": "The name of your app service",
            "placeholder": "yourcompanyname",
            "constraints": {
              "validations": [
                {
                  "regex": "^[a-zA-Z0-9]{4,30}$",
                  "message": "Alphanumeric, between 4 and 30 characters."
                },
                {
                  "isValid": "[not(equals(steps('domain').appServiceAvailabilityApi.nameAvailable, false))]",
                  "message": "[concat('Error with the url: ', steps('domain').domainName, '. Reason: ', steps('domain').appServiceAvailabilityApi.reason)]"
                },
                {
                  "isValid": "[greater(length(steps('domain').domainName), 4)]",
                  "message": "The unique domain suffix should be longer than 4 characters."
                },
                {
                  "isValid": "[less(length(steps('domain').domainName), 30)]",
                  "message": "The unique domain suffix should be shorter than 30 characters."
                }
              ]
            }
          },
          {
            "name": "section1",
            "type": "Microsoft.Common.Section",
            "label": "URLs to be created:",
            "elements": [
              {
                "name": "domainExamplePortal",
                "type": "Microsoft.Common.TextBlock",
                "visible": true,
                "options": {
                  "text": "[concat('https://', steps('domain').domainName, '.azurewebsites.net - The main app service URL')]"
                }
              }
            ],
            "visible": true
          }
        ]
      }
    ],
    "outputs": {
      "desiredDomainName": "[steps('domain').domainName]"
    }
  }
}
You can copy the above code and test it in the createUiDefinition.json sandbox that Azure provides.
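If you want to exercise the same check outside the portal, the underlying call can be reproduced directly. A sketch (the subscription id and bearer token are placeholders; the path, api-version, and response fields are the ones used by the ArmApiControl element above):

import requests

def site_name_available(name, subscription_id, token):
    # POST the same body the ArmApiControl sends and read back the verdict.
    url = (f"https://management.azure.com/subscriptions/{subscription_id}"
           "/providers/Microsoft.Web/checknameavailability?api-version=2021-02-01")
    body = {"name": name, "type": "Microsoft.Web/sites"}
    resp = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {token}"}).json()
    return resp.get("nameAvailable"), resp.get("reason")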

Data Factory: AzureSQL in- and output for pipeline activity type AzureMLBatchExecution

In Azure Data Factory, I'm trying to call an Azure Machine Learning model from a Data Factory pipeline. I want to use an Azure SQL table as input and another Azure SQL table for the output.
First I deployed a Machine Learning (classic) web service. Then I created an Azure Data Factory pipeline using a LinkedService (type 'AzureML', using the Request URI and API key of the ML web service) and input and output datasets of type 'AzureSqlTable'.
Deployment and provisioning succeeded. The pipeline starts as scheduled, but keeps 'Running' without any result, and the pipeline activity is not shown in Monitor & Manage: Activity Windows.
On various sites and tutorials, I only find JSON scripts using the activity type 'AzureMLBatchExecution' with BLOB inputs and outputs. I want to use Azure SQL inputs and outputs, but I can't get this working.
Can someone provide a sample JSON script or tell me what's possibly wrong with the code below?
Thanks!
{
  "name": "Predictive_ML_Pipeline",
  "properties": {
    "description": "use MyAzureML model",
    "activities": [
      {
        "type": "AzureMLBatchExecution",
        "typeProperties": {},
        "inputs": [
          {
            "name": "AzureSQLDataset_ML_Input"
          }
        ],
        "outputs": [
          {
            "name": "AzureSQLDataset_ML_Output"
          }
        ],
        "policy": {
          "timeout": "02:00:00",
          "concurrency": 3,
          "executionPriorityOrder": "NewestFirst",
          "retry": 1
        },
        "scheduler": {
          "frequency": "Week",
          "interval": 1
        },
        "name": "My_ML_Activity",
        "description": "prediction analysis on ML batch input",
        "linkedServiceName": "AzureMLLinkedService"
      }
    ],
    "start": "2017-04-04T09:00:00Z",
    "end": "2017-04-04T18:00:00Z",
    "isPaused": false,
    "hubName": "myml_hub",
    "pipelineMode": "Scheduled"
  }
}
With a little help from a Microsoft technician, I've got this working. The JSON script above only needed a change in the schedule section:
"start": "2017-04-01T08:45:00Z",
"end": "2017-04-09T18:00:00Z",
A pipeline is active only between its start time and end time. Because the scheduler is set to weekly, the pipeline is triggered at the start of the week, and that date must fall between the start and end dates. For more details about scheduling, see: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-scheduling-and-execution
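To make that concrete, here is a small sketch of the reasoning (assuming weekly slices begin at the start of the week, taken here as Monday): the original 2017-04-04 09:00-18:00 window contains no week start, while the corrected window contains Monday 2017-04-03.

from datetime import datetime, timedelta

def weekly_triggers(start, end):
    # Walk back to midnight on the Monday on or before `start`,
    # then step forward one week at a time, keeping dates in the window.
    cursor = (start - timedelta(days=start.weekday())).replace(
        hour=0, minute=0, second=0, microsecond=0)
    while cursor < end:
        if cursor >= start:
            yield cursor
        cursor += timedelta(weeks=1)

print(list(weekly_triggers(datetime(2017, 4, 4, 9), datetime(2017, 4, 4, 18))))
# [] -> the pipeline never fires
print(list(weekly_triggers(datetime(2017, 4, 1, 8, 45), datetime(2017, 4, 9, 18))))
# [datetime.datetime(2017, 4, 3, 0, 0)] -> one weekly run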
The Azure SQL Input dataset should look like this:
{
  "name": "AzureSQLDataset_ML_Input",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "SRC_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Input"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    },
    "external": true,
    "policy": {
      "externalData": {
        "retryInterval": "00:01:00",
        "retryTimeout": "00:10:00",
        "maximumRetry": 3
      }
    }
  }
}
I added the external and policy properties to this dataset (see the script above), and after that it worked.