Azure Function does not execute in Azure (No Error)

I created an Azure Function App to send emails (it uses Service Bus topics), and it works beautifully locally using the SDK/CLI tools. However, when I publish it to Azure using the Visual Studio Publish options, the function doesn't appear to run: there is no error, and the monitor shows "No Data Available". The only thing I can think of is that perhaps the local.settings.json file, which allows me to run the app locally, needs to be manually entered somewhere in the Function App?
Clicking Run next to function.json just shows "2017-12-01T16:59:21 Welcome, you are now connected to log-streaming service." in the Logs; no other information is presented. Also, I checked the topic and it still has messages pending.
I have verified the files published successfully to the bin folder using Kudu, and the function.json (below) looks right to me. Does anyone have any ideas why this might not be triggered and isn't erroring? As a note, the function folder only contains function.json, but one level up the bin folder and the DLL referenced in the json are there.
function.json:
{
  "generatedBy": "Microsoft.NET.Sdk.Functions-1.0.0.0",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "topicName": "topicemail-dev",
      "subscriptionName": "subLowPriority",
      "accessRights": "manage",
      "name": "mySbMsg"
    }
  ],
  "disabled": false,
  "scriptFile": "..\\bin\\Emailer.dll",
  "entryPoint": "Emailer.Functions.LowEmail"
}

When deployed to Azure, Functions does not use local.settings.json. Instead, it reads values from the App Settings of the Function App. All you need to do is add an App Setting for each of the properties you have in local.settings.json.
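For example, if your local.settings.json looks roughly like the sketch below (the "ServiceBusConnection" key is only a placeholder name; use whatever connection-string setting your ServiceBusTrigger attribute references), each entry under "Values" needs a matching App Setting in the portal (Function App > Application settings):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage connection string>",
    "AzureWebJobsDashboard": "<storage connection string>",
    "ServiceBusConnection": "<Service Bus connection string>"
  }
}

Only the keys inside "Values" need to become App Settings; the outer wrapper ("IsEncrypted", "Values") is local-only.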

For people with the same issue who still can't get it working with the selected answer, see Azure function implemented locally won't work in the cloud; it might help.

Related

AASA - Apple App Site Association - Not working

I have been having a long and frustrating experience trying to get AASA to work for webcredentials. My goal here is to allow usernames and passwords to be stored in the iOS keychain.
I did have this working on a root domain the other week, but that is not sufficient for my scenario, as I will explain. It didn't work for me straight away, I have to say, but it eventually started working after a clean build, so I thought that was the issue; now I am not so sure.
I am using Expo with EAS build. We have a multi-tenant application. From a single codebase we deploy to multiple apps in the store. All are on the same team ID but they are separate applications and use separate credentials, nothing is shared.
I am confident my app's textContentType of username and password on my TextFields is correct, as this has not changed from when I managed to get it working originally, and I have checked it countless times.
Expectation
For the "Save Password" prompt to be displayed after login. What I have noticed however is when going to store a password manually using "add password" via iCloudKeychain from the keyboard accessory this does accurately show the correct "TENANT_SUBDOMAIN.example.com". I find this confusing.
Goal Scenario
I am hosting a site on Netlify. I have it setup to support wildcard subdomains with a LetsEncrypt provisioned wildcard SSL certificate. I then have edge functions which change the content of my index.html and apple-app-site-association file dynamically based on the requested subdomain.
I have added the Associated Domains capability to my provisioning profile.
I am using the latest Expo 47 and EAS build. I have added in the appropriate associated domains configuration and I can see this when introspecting my entitlements under com.apple.developer.associated-domains and it is correct.
I am using TestFlight for testing. I am doing a --clean-build on EAS every time and I also increase the runtime version. I have also tried manually refreshing credentials outside of the build process which does this automatically. This must be using the correct provisioning profile otherwise I would get a build failure as the requested entitlements wouldn't match.
The AASA file is currently hosted just in the .well-known directory. I have tried using the root and also tried using both. There are no redirects taking place.
I am aware the AASA file is pulled on application installation and update. I repeatedly remove the apps and then reboot my phone in an attempt to reset any device caches.
The content-type of the file is application/json and I have confirmed this using developer tools in the browser.
There is no robots.txt or anything blocking the request from an infrastructure perspective. There are no additional firewalls or geo restricted access as I am just using plain Netlify to host this, nothing fancy.
I am confident the Team ID and bundle IDs are correct in the AASA file.
I remove the content-length header in the Edge function so it is correctly calculated by the network instead and I have confirmed this using curl.
When I check the file using https://app-site-association.cdn-apple.com/a/v1/site.example.com, Apple has the correct file cached on its CDN, so I would expect it to work.
I added an applinks section so I could use the Apple App Search API validation tool and the Branch.io AASA verification tool to verify correctness. Branch.io says the file is fine, and Apple says it's fine too, but because the app has not been deployed to the store yet I see "Error no apps with domain entitlements". From what I can tell this is normal in development and makes sense, as the tool uses the currently released version of the app to verify the deep link configuration. So to me this means Apple can parse the file correctly.
When I stream my device console logs, on install I can see the AASA requests for the correct domains. I see no errors from swcd; I just see Beginning data task AASA-XXXX with the correct domains.
When I run Charles Proxy on my phone with a verified SSL installation (also reinstalled a few times now), I do not see quite what I would expect, but the device logs seem to imply it is doing the correct thing. When I view the app-site-association... URL requests in Charles, there is one per application install, which is correct. The request is marked as Unknown, and when I look at it the host is shown but, as you would expect with SSL, I see no path. The info says METHOD: CONNECT with Error - Input Error: EOF. This is the only error I see; I am not sure if it is a red herring or something to do with Charles. Given the error, as you would expect, there is no body in the request or response. It is worth noting that in general testing I have no VPN enabled and I do not have Private Relay enabled in my iOS settings.
When I perform a sysdiagnose, I see the following in the swcutil_show.txt device log at the timestamp from my console log. This looks correct in comparison to the webcredentials and applinks services of other apps I see there, and I see no errors:
Service: webcredentials
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x141816200> { v = 0, t = 0x8, u = 0x1e7c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x7c1e000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-09 14:14:32 +0000
Next Check: 2022-12-14 14:03:00 +0000
Service: applinks
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x13fd38d00> { v = 0, t = 0x8, u = 0x219c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x9c21000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
Patterns: {"/":"*"}
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-13 13:13:23 +0000
Next Check: 2022-12-18 13:01:51 +0000
At end of file:
MYTEAMID.com.cf.example.b2c.ios: 8 bytes
(This seems correct for all apps)
Other Scenario
I have tried setting this up using an apex on another domain which hasn't been seen before by Apple. I have tried using a subdomain with a root domain serving the same content and I have tried the subdomain and root domain on their own. I have also tried not using the Edge functions and having static files but to no avail.
When I do this I ensure I wait for the Apple CDN to catch up and remove/add entries prior to deleting the apps, rebooting my device, and reinstalling to test.
AASA File
AASA content comes back with the correct payload and Content-Type: application/json and Content-Length headers, both from Apple's CDN and the origin. When I somehow had this working in my initial test it was on a root domain, and I did not have an applinks section; that was only added so I could use the verification tools for universal links.
I am not sending back different content or duplicated content and I block the www subdomain - I have also tried it with a www subdomain for the record.
{
  "applinks": {
    "details": [
      {
        "appIDs": [
          "MYTEAMID.com.cf.example.b2c.ios"
        ],
        "components": [
          {
            "#": "no_universal_links",
            "exclude": true,
            "comment": "Matches any URL with a fragment that equals no_universal_links and instructs the system not to open it as a universal link."
          }
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
I have also tried this with the older format:
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "MYTEAMID.com.cf.example.b2c.ios",
        "paths": [
          "*"
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
associatedDomains (iOS Expo config):
associatedDomains: [
`webcredentials:${SUBDOMAIN}.example.app`,
`applinks:${SUBDOMAIN}.example.app`,
],
Help :)
I have been trying to get this to work for a long time now and I am completely out of ideas. If anybody has any suggestions, I would really appreciate it. I am very confused about how the device's requests seem correct and the CDN content is correct, yet it is still not working. It's also worth reiterating that I need different subdomains for each tenant, as the credentials must not be shared across apps, so the keychain-to-domain association must be different for each app.
I am wondering if it's the LetsEncrypt wildcard SSL certificate, but I wouldn't expect the file to verify, or Apple to cache it, if that were the case. It seems very unlikely to me, but it is the only thing I haven't tried at this point.
Many Thanks,
Mark

Autodesk Forge Configurator Inventor - Azure deployment problem

I am having trouble deploying an app to Azure.
I started with the https://github.com/Autodesk-Forge/forge-configurator-inventor repo. I managed to run it locally with no errors. I am able to log in, upload my own zipped files, change parameters, export a PDF and download it. Everything is fine. Now I want to publish the app to Azure.
The app is currently running, so you can check it out: https://pjk-config.azurewebsites.net
WHAT IS WRONG: I cannot upload any models after login. No error is displayed. If I make a change to the wrench or wheel model and update it, nothing happens either.
What I did:
created an Azure account,
changed the callback URL to my app (in my situation: "https://pjk-config.azurewebsites.net/"),
changed WebApplication's Program.cs by removing the UseKestrel() statement (see the snippet below):
{
    webBuilder.UseStartup<Startup>();
    var port = Environment.GetEnvironmentVariable("PORT");
    // If deployed to a service like Heroku, need to listen on port defined in the environment, not the default one
    if (!string.IsNullOrEmpty(port))
    {
        webBuilder.UseUrls("http://*:" + port);
        Log.Logger.Information($"PORT environment variable defined to:{port}");
    }
});
appsettings:
inviteonlymode - false
embedded mode - false
publisher settings (but I see polling in the output, so I think something is missing):
"CompletionCheck": "Callback",
"CallbackUrlBase": "https://pjk-config.azurewebsites.net"
I deployed through VS 2019 by right-clicking the WebApplication project and choosing Publish, following this reference:
https://learn.microsoft.com/en-us/visualstudio/deployment/quickstart-deploy-to-azure?view=vs-2019
If you need any additional info, just let me know. I have been fighting with this for almost 30 days on my own. I am a beginner and this is my first question on this site, so I apologize for the lack of precise information about my problem. Just tell me what you need and I will send it over.
Thank you for your effort and help. I figured out how to deploy to Azure and run without bugs. It was about the callback. In my situation, the Callback URL under my apps > Autodesk Forge should be https://myapp.azurewebsites.net (no slash at the end), and in appsettings.json I went with this:
"Publisher": {
"CompletionCheck": "Polling",
"CallbackUrlBase": "https://myapp.azurewebsites.net/"
Notice the slash at the end.
Probably the next step will be changing CompletionCheck to Callback.
The app is running and I can work on the Inventor part.
Thanks!

Google Drive Rest API - How to check if file has changed

Is there a reliable way, short of comparing full contents, of checking if a file was updated/change in Drive?
I have been struggling with this for a bit. Here are the two things I have tried:
1. File version number
I upload a plain text file to Google Drive (simple upload, update endpoint), and save the version from the file metadata returned after a successful upload.
Then I poll the Drive API (get endpoint) occasionally to check if the version has changed.
The trouble is that within a second or two of uploading the file, the version gets bumped up again.
There are no changes to the file content. The file has not been opened, viewed, or even downloaded anywhere else. Still, the version number increases from what it was after the upload.
To my code this version number change indicates that the remote file has been changed in Drive, so it downloads the new version. Every time!
2. The Changes endpoints
As an alternative I tried using the Changes api.
After I upload the file, I get a page token using changes.getStartPageToken or changes.list.
Later I use this page token to poll the Changes API for changes, and filter the changes for the fileId of uploaded file. I use these options when polling for changes:
{
  "includeRemoved": false,
  "restrictToMyDrive": true,
  "spaces": "drive"
}
Here again, there is the same problem as with the version number. The page token returned immediately after uploading the file changes again within a second or two. The new page token shows the uploaded file having been changed.
Again, there is no change to the content of the file. It hasn't been opened, updated, downloaded anywhere else. It isn't shared with anyone else.
Yet, a few seconds after uploading, the file reappears in the changes list.
As a result, the local code redownloads the file from Drive, assuming remote changes.
Possible workaround
As a hacky workaround, I could wait a few seconds after the file upload before getting the new file version / changes page token. This might take care of the delayed version increment issue.
However, there is no documentation of what is causing this phantom change in version number (or changes.list). So, I have no sure way of knowing:
How long a wait is safe enough to get a 'settled' version number without losing possible changes by other users/apps?
Whether the new (delayed) version number will be stable, or may change again at any time for no reason?
Is there a reliable way, short of comparing full contents, of checking if a file was updated/change in Drive?
You can try using the md5Checksum property of the File resource object, provided your file is not a Google Docs file (i.e. it is a binary file). You should be able to use that to track changes to the contents of your binary files.
You might also be able to use the Revisions API.
The Revisions resource object also has a md5Checksum property.
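A minimal files.get metadata request (just a sketch; FILE_ID is a placeholder) that returns only the fields useful for change detection could look like this:

Endpoint:
GET https://www.googleapis.com/drive/v3/files/FILE_ID?fields=md5Checksum,version,modifiedTime

Comparing the returned md5Checksum with the value you stored after your own upload tells you whether the file content, as opposed to just the metadata, has actually changed.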
As a workaround, how about using the Drive Activity API? I think there are several possible answers for your situation, so please think of this as just one of them.
When the Drive Activity API is used, activity information about the target file can be retrieved. For example, from ActionDetail, you can see whether the target file was edited, renamed, deleted and so on.
The sample endpoint and request body are as follows.
Endpoint:
POST https://driveactivity.googleapis.com/v2/activity:query?fields=activities%2CnextPageToken
Request body:
{"itemName": "items/### fileId of target file ###"}
Response:
A sample response is shown below. From it you can see that the file with this fileId and filename was edited at the given timestamp.
{
  "activities": [
    {
      "primaryActionDetail": {
        "edit": {} <--- If the target file was edited, this property is added.
      },
      "actors": [
        {
          "user": {
            "knownUser": {
              "personName": "people/### userId who edited the target file ###",
              "isCurrentUser": true
            }
          }
        }
      ],
      "actions": [
        {
          "detail": {
            "edit": {}
          }
        }
      ],
      "targets": [
        {
          "driveItem": {
            "name": "items/### fileId of target file ###",
            "title": "### filename of target file ###",
            "file": {},
            "mimeType": "### mimeType of target file ###",
            "owner": {
              "user": {
                "knownUser": {
                  "personName": "people/### owner's userId ###",
                  "isCurrentUser": true
                }
              }
            }
          }
        }
      ],
      "timestamp": "2000-01-01T00:00:00.000Z"
    }
  ],
  "nextPageToken": "###"
}
Note:
To use this API, please enable the Drive Activity API in the API console and include https://www.googleapis.com/auth/drive.activity.readonly in your scopes.
When I used this API the response was fast; if the response is slow when you use it, I apologize.
References:
Google Drive Activity API
ActionDetail
If this was not what you want, I apologize.
What you are seeing is the eventual consistency feature of the Google Drive filesystem. If you think about search, it doesn't matter how quickly a search index is updated, only that it is eventually updated and is very efficient for reading. Google Drive works on the same premise.
Drive acknowledges your updates as quickly as possible, long before those updates have propagated to all worldwide copies of your file. Derived data (e.g. timestamps and, I seem to recall, md5sums) are also calculated after the update has "completed".
The solution largely depends on how problematic the redundant syncs are to your app.
A delay of a few seconds is enough to deal with the vast majority of phantom updates.
You could switch to the v2 API and use etags.
You could implement your own version number using custom properties. Every time you sync up, you increment your own version number, and you only sync down if your application's version number has changed.
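As a rough sketch of the custom-properties idea (the "syncVersion" key name is just an assumption), you could store your own counter in the file's appProperties on every sync-up and read it back when polling:

Endpoint:
PATCH https://www.googleapis.com/drive/v3/files/FILE_ID

Request body:
{"appProperties": {"syncVersion": "42"}}

The value comes back from files.get when you request fields=appProperties, and unlike the built-in version field it only changes when your own code changes it.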

Is it possible to create multiple Chrome Hosted Apps for the same domain?

Our product has both a free component and a full-featured, subscription-based web application. I've created a Chrome Hosted App - essentially an installable bookmark - for both of those parts of our product.
The interesting parts of the app manifests are as follows:
"manifest_version": 2,
"app": {
"launch": {
"container": "tab",
"web_url": "https://paydirtapp.com/dashboard"
}
}
"manifest_version": 2,
"app": {
"launch": {
"container": "tab",
"web_url": "https://paydirtapp.com/free_invoice_creator"
}
}
I can install the free invoice creator app, and the full featured app, but not both at the same time.
Attempting to do so (in Chrome 26.0.1410.10 (Official Build 183151) dev) causes the following error message:
"An error has occurred. Could not add the application because it conflicts with "Free Invoice Maker".
The only reference I can find to this issue is in https://developers.google.com/chrome/apps/docs/developers_guide#manifest, where they state the following:
Important: If you provide multiple apps, avoid overlapping URLs. If a user tries to install an app whose "web_url" or "urls" values overlap with those of an already installed app, the second installation will fail due to URL conflict errors. For example, an app that specifies a "urls" value of "http://mail.example.com/" would conflict with an app that specifies "http://mail.example.com/mail/".
Previously, my web_url value was just set to https://paydirtapp.com/, which caused the same error. I expected that updating it so that it wasn't a substring of the other app would solve the problem, but it hasn't.
Does anyone know if it's possible to have multiple Chrome Hosted Apps where the web_url is for the same domain?
Answer from Moshe Matz (copied from a comment):
Using separate subdomains for each app should work.
For example, use https://dashboard.paydirtapp.com and https://free_invoice_creator.paydirtapp.com. You will likely need a new SSL certificate that contains both of those names.
Separate subdomains should work. We don't currently have a solution for the same domain case.
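For illustration (a sketch only; the exact subdomains are whatever you choose to host each part on, and both names must be covered by your SSL certificate), the two launch sections would then look something like this:

"app": {
  "launch": {
    "container": "tab",
    "web_url": "https://dashboard.paydirtapp.com/"
  }
}
"app": {
  "launch": {
    "container": "tab",
    "web_url": "https://free_invoice_creator.paydirtapp.com/"
  }
}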

Start an external application from a Google Chrome Extension?

How to start an external application from a Google Chrome Extension?
So basically I have an executable file which does the job when you launch it. I need to be able to start it without a window (it is a console application) and pass the current URL to it as an argument.
Previously, you would do this through NPAPI plugins.
However, Google is now phasing out NPAPI for Chrome, so the preferred way to do this is using the native messaging API. The external application would have to register a native messaging host in order to exchange messages with your application.
You can't launch arbitrary commands, but if your users are willing to go through some extra setup, you can use custom protocols.
E.g. you have the users set things up so that some-app:// links start "SomeApp", and then in my-awesome-extension you open a tab pointing to some-app://some-data-the-app-wants, and you're good to go!
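A minimal sketch of that idea from the extension side (assuming the helper application has already registered the some-app:// protocol on the user's machine, and the extension has the "tabs" permission so it can read the current URL):

// Grab the active tab's URL and hand it to the external app via the custom protocol.
chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
  const currentUrl = tabs[0].url;
  chrome.tabs.create({ url: "some-app://" + encodeURIComponent(currentUrl) });
});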
Native messaging host
Chrome-extensions
{
  "name": "AppName",
  "description": "",
  "version": "1.0",
  "manifest_version": 3,
  "permissions": [
    "nativeMessaging" // 👈 https://developer.chrome.com/docs/extensions/mv3/declare_permissions/
  ]
  // ...
}
Host
Add schema
@echo off
:: If you add "/f" then you can force the write.
REG ADD "HKCU\Software\Google\Chrome\NativeMessagingHosts\com.my_company.my_application" ^
    /ve /t REG_SZ ^
    /d "%~dp0Mymanifest.json"
// Mymanifest.json
{
  "name": "com.my_company.my_application",
  "description": "My Application",
  "path": "relative_dir/my.exe",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://nbjjflbnekmabedahdolabcpahfjojjb/"
  ]
}
chrome.runtime.sendNativeMessage
example:
// your.js
chrome.runtime.sendNativeMessage("com.my_company.my_application",
  {key1: "value1", key2: "value2"}, // 👈 Send those parameters to your program.
  (response) => {
    console.log(response)
  }
)
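On the host side, the executable referenced by "path" in Mymanifest.json reads framed JSON from stdin and writes framed JSON to stdout: each message is prefixed with a 4-byte, native-byte-order length. A minimal sketch of that framing in Node.js (an assumption for illustration; the example repository below uses Go):

// host.js - echo back whatever the extension sends (sketch; ignores partial reads)
process.stdin.on('readable', () => {
  const header = process.stdin.read(4); // 4-byte length prefix (little-endian on most platforms)
  if (!header) return;
  const length = header.readUInt32LE(0);
  const body = process.stdin.read(length); // UTF-8 JSON payload
  if (!body) return;
  const message = JSON.parse(body.toString('utf8'));
  const reply = Buffer.from(JSON.stringify({ received: message }), 'utf8');
  const replyHeader = Buffer.alloc(4);
  replyHeader.writeUInt32LE(reply.length, 0);
  process.stdout.write(replyHeader);
  process.stdout.write(reply);
});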
Example repository
I have created a project, thunder/e11fde9, whose ultimate goal is to be able to use a browser as input and then open a specified file locally (without a mouse, if possible).
It is still in development, but I think the early code is enough. The link is below.
chrome-ext: a test Chrome extension.
go: I built an EXE with Golang and put it in host/bin to simulate the local program.
host: installs the schema and specifies the program (see manifest.json). It already has a log that records the results of the browser's transmissions, and the browser can also get the program's return value.
Reference
GoogleChrome/chrome-extensions-samples
It's useful, and it provides a way to communicate using Python.
jfarleyx/chrome-native-messaging-golang
Uses Golang to communicate.
This is just a hypothesis, since I can't verify it right now: with Apache, what if you make a PHP script on your local machine that calls your executable, and then call this script via POST or GET from HTML/JavaScript?
Would it work?
Let me know.