I am trying to deploy a scheduled Azure WebJob. I have everything working except that the job is being deployed as 'OnDemand'.
I am building and releasing using Visual Studio Team Services.
I have set up as follows:
The contents of the webjob-publish-settings.json are
{
"$schema": "http://schemastore.org/schemas/json/webjob-publish-
settings.json",
"webJobName": "testCIJob"
}
And the settings.job are:
{"schedule": "0 0/10 0 0 0 0"}
which I believe is every 10 minutes, every day.
I have also linked my web job to my web app and have a webjobs-list.json file with the content:
{
"$schema": "http://schemastore.org/schemas/json/webjobs-list.json",
"WebJobs": [
{
"filePath": "../AscendancyCF.CmaServiceWebJob/AscendancyCF.CmaServiceWebJob.csproj"
}
]
}
I have got this far by searching the web, but I find a lot of the information is rapidly out of date (an example).
Also, I don't want to overload the question with information, so if anyone needs more, please just ask and I'll try to provide it.
How do I get my web job to deploy on the schedule?
A similar question is here: How to deploy a webjob through CI in VSO with vNext.
Try the solutions from it:
You can use a CRON expression to create the WebJob schedule if your app is running in Basic or a higher tier. Refer to this link for details: Create a scheduled WebJob using a CRON expression
Otherwise, you need to enable continuous delivery of Azure WebJobs.
According to the Deploy Webjobs with Visual Studio article, the content of the webjob-publish-settings.json should be something like:
{
"$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
"webJobName": "WebJob1",
"startTime": "2014-06-23T00:00:00-08:00",
"endTime": "2070-06-27T00:00:00-08:00",
"jobRecurrenceFrequency": "Minute",
"interval": 10,
"runMode": "Scheduled"
}
That should run every 10 minutes until 2070. There is one note in the article, though, that you might want to keep in mind:
If you configure a Recurring Job and set recurrence frequency to a
number of minutes, the Azure Scheduler service is not free. Other
frequencies (hours, days, and so forth) are free.
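If you go down the CRON route from the first option, a minimal settings.job sketch for an every-10-minutes schedule (assuming the six-field {second} {minute} {hour} {day} {month} {day of week} format that the linked article describes) would be:
{"schedule": "0 */10 * * * *"}
Note that the expression in the question pins the hour, day, and month fields to 0 instead of using * wildcards, which is likely why it does not behave as an every-10-minutes, every-day schedule.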
The title sounds quite comprehensive, but my baseline question is quite simple, I guess.
Context
In Azure, I have an IoT hub, which I am sending messages to. I use a modified version of one of the samples from the Azure IoT SDK for Python.
Sending works fine. However, instead of a string, I send a JSON structure.
When I watch the events flowing into the IoT hub, using the Cloud shell, it looks like this:
PS /home/marcel> az iot hub monitor-events --hub-name weathertestiothub
This extension 'azure-cli-iot-ext' is deprecated and scheduled for removal. Please remove and add 'azure-iot' instead.
Starting event monitor, use ctrl-c to stop...
{
"event": {
"origin": "raspberrypi-zero-wh",
"payload": "{ \"timestamp\": \"1608643863720\", \"locationDescription\": \"Attic\", \"temperature\": \"21.941\", \"relhumidity\": \"71.602\" }"
}
}
Issue
The data seems fine, except that the payload looks strange here. But the payload is literally what I send from the device, using the SDK sample.
Is this the correct way to do it? In the end, I have a very hard time actually getting the data into the Time Series Insights model, so I guess my structure is to blame.
Question
What is a recommended JSON data structure to send to the IoT hub for later use?
You should add the following two lines to your message in your Python SDK sample:
msg.content_encoding = "utf-8"
msg.content_type = "application/json"
This should resolve your formatting concern.
We've also updated our samples to reflect this: https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/sync-samples/send_message.py
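For context, here is a minimal sketch of where those two lines fit in the send flow with the azure-iot-device package (the connection string and payload values are placeholders, not taken from the question):
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string for the device
conn_str = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"
client = IoTHubDeviceClient.create_from_connection_string(conn_str)

# Build the payload as a dict and serialize it yourself
payload = {"locationDescription": "Attic", "temperature": 21.9, "relhumidity": 71.6}
msg = Message(json.dumps(payload))

# Tell IoT Hub the body is UTF-8 encoded JSON so downstream services can parse it
msg.content_encoding = "utf-8"
msg.content_type = "application/json"

client.send_message(msg)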
I ended up using the tip by @elhorton, but it was not the key change. Nonetheless, the formatting in the Azure Cloud Shell monitor now looks much better:
"event": {
"origin": "raspberrypi-zero-wh",
"payload": {
"temperature": 21.543947753906245,
"humidity": 69.22964477539062,
"locationDescription": "Attic"
}
}
The key was:
Include the message source time in ISO format:
from datetime import datetime, timezone
# Use UTC (not local time) so the value matches the "-utc" property name
timestampIso = datetime.now(timezone.utc).isoformat()
message.custom_properties["iothub-creation-time-utc"] = timestampIso
Use the locationDescription as the Time Series ID Property; see https://learn.microsoft.com/en-us/azure/time-series-insights/how-to-select-tsid (maybe I could also have used iothub-connection-device-id, but I did not test that alone specifically).
I guess using "iothub-connection-device-id" would make "raspberrypi-zero-wh" the name of the time series instance. I agree with your choice of using "locationDescription" as the TSID; so Attic becomes the time series instance name, and temperature and humidity will be your variables.
I recently did a deployment to WestEurope by mistake. I deleted the resources and thought I could redeploy to UKSouth; however, whenever I try to redeploy I get the error below:
- At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details. (Code: DeploymentFailed)
- {
"error": {
"code": "InvalidDeploymentLocation",
"message": "Invalid deployment location 'uk south'. The deployment already exists in location 'westeurope'."
}
} (Code:Conflict)
CorrelationId: 8c2a4cd6-4409-46c3-9b7c-544134f0f942
The master template calls several nested templates and I'm having trouble trying to identify where the issue lies. I have checked and there's no soft delete enabled anywhere, and the resources have definitely been deleted from Azure.
Help..
Thanks in advance :)
Not sure if this is resolved, but here is what I did.
Go to the Azure Portal -> Subscriptions
On the left pane, under Settings, click on Deployments
Find the deployment that was done earlier and delete it manually
This should fix your problem.
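If you prefer the CLI over the portal, a rough equivalent in recent Azure CLI versions (the deployment name below is a placeholder) is to list the subscription-level deployments and delete the stale one:
az deployment sub list --output table
az deployment sub delete --name <deploymentName>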
I have created a script using Android Studio. The script logs into the app, inserts some data, and opens all fragments.
But when I add the script to my Robo test, it only gets to the first screen, then waits there for 5 minutes and finishes with a Passed mark.
If anybody knows anything about this, please help.
I encountered the same problem and contacted Firebase support.
As per their response, it seems that the delay actions recorded in the Robo script by Android Studio (3.0.1) are extremely long (an hour or more). These long delays block the script execution.
For example, the Robo script I recorded starts with this delay action:
{
"eventType": "DELAYED_MESSAGE_POSTED",
"timestamp": 1522050751149,
"actionCode": -1,
"delayTime": 3596480,
"canScrollTo": false,
"elementDescriptors": []
}
Notice that the "delayTime" set by Android Studio is 3596480 milliseconds, which translates to 59.94 minutes. This value is incorrect.
The quick fix for this is to edit the script manually either by removing the faulty DELAYED_MESSAGE_POSTED events or by editing the "delayTime" values to something more realistic (5000 for example).
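For example, here is the same delay action from above with only the "delayTime" value edited down:
{
"eventType": "DELAYED_MESSAGE_POSTED",
"timestamp": 1522050751149,
"actionCode": -1,
"delayTime": 5000,
"canScrollTo": false,
"elementDescriptors": []
}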
Firebase support says that this problem will be fixed in Android Studio 3.2, which is currently in canary.
I created an Azure Function App to send emails (it uses Service Bus topics), and I have it working beautifully locally using their SDK/CLI tools, but when I publish it to Azure using the Visual Studio Publish options available, the function doesn't appear to run, there is no error, and the monitor shows "No Data Available". The only thing I can possibly think of is that perhaps the local.settings.json file, which allows me to run the app locally, needs to be manually entered someplace in the function app?
Clicking Run next to function.json just tells me in the Logs "2017-12-01T16:59:21 Welcome, you are now connected to log-streaming service."; no other information is presented. Also, I checked the topic and it still has messages pending.
I have verified the files did publish successfully to the bin folder using Kudu, and the function.json (below) looks right to me. Does anyone have any ideas why this might not be triggered and isn't erroring? As a note, the function folder only has function.json in it, but one level up the bin folder and the DLL shown in the json are there.
function.json:
{
"generatedBy": "Microsoft.NET.Sdk.Functions-1.0.0.0",
"configurationSource": "attributes",
"bindings": [
{
"type": "serviceBusTrigger",
"topicName": "topicemail-dev",
"subscriptionName": "subLowPriority",
"accessRights": "manage",
"name": "mySbMsg"
}
],
"disabled": false,
"scriptFile": "..\\bin\\Emailer.dll",
"entryPoint": "Emailer.Functions.LowEmail"
}
When deployed to Azure, Functions does not use local.settings.json. Instead, it reads values from the App Settings. All you need to do is add an App Settings value for each of the properties you have in local.settings.json.
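For example, assuming the Service Bus connection string in your local.settings.json is called ServiceBusConnection (the app and resource group names below are placeholders), you can add it either under the Function App's Application settings in the portal or with the Azure CLI:
az functionapp config appsettings set --name <functionAppName> --resource-group <resourceGroup> --settings "ServiceBusConnection=<connection-string>"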
For people with the same issue who still can't get it working with the selected answer, see Azure function implemented locally won't work in the cloud; it might help.
I've just started experimenting with Azure Functions and I'm trying to understand how to control the app settings depending on the environment.
In .NET Core you could have appsettings.json, appsettings.Development.json, etc., and as you moved between different environments the config would change.
However, from looking at the Azure Functions documentation, all I can find is that you can set up config in the Azure portal, but I can't see anything about setting up config in the solution.
So what is the best way to manage configuration per environment?
Thanks in advance :-)
The best way, in my opinion, is using a proper build and release system, like VSTS.
What I've done in one of my solutions is create an ARM template of my Function App and deploy it using a release pipeline with VSTS Release Management.
This way you can just add a value to the template.json, like the one below.
"appSettings": [
// other entries
{
"name": "MyValue",
"value": "[parameters('myValue')]"
}
You will need another file, called parameters.json, which will hold the values. This file looks like this (at the moment):
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"name": {},
"storageName": {},
"location": {},
"subscriptionId": {}
}
}
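For reference, here is a stripped-down sketch of where that appSettings block sits inside the Function App resource in template.json (the apiVersion and the surrounding properties are illustrative, not copied from my actual template):
{
"type": "Microsoft.Web/sites",
"apiVersion": "2018-11-01",
"kind": "functionapp",
"name": "[parameters('name')]",
"location": "[parameters('location')]",
"properties": {
"siteConfig": {
"appSettings": [
{
"name": "MyValue",
"value": "[parameters('myValue')]"
}
]
}
}
}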
Back in VSTS you can just change/override the values of these parameters in the portal.
By using such a workflow you will get a professional CI/CD implementation where no one has to bother with the actual secrets; they are only known to the system administrators.