Azure timer function that calls a Web API

I have to create a timer function that will call a particular API at a specific interval.
I don't want to use a console application for that.
In short, I have to call a Web API every 10 minutes.

You can also consider using Azure Logic Apps. It provides a declarative way to schedule tasks and workflows.
https://learn.microsoft.com/en-us/azure/connectors/connectors-native-recurrence

Even though your post isn't really a question, you are probably looking for Azure Scheduler.
With Azure Scheduler you don't even have to write a function; just enter the URL you want to call and specify the trigger. (Note that Azure Scheduler has since been retired in favor of Logic Apps.)
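If you do end up writing a Function, the timer trigger's schedule is a six-field NCRONTAB expression, and "0 */10 * * * *" fires on minutes 0, 10, 20, 30, 40, and 50 of every hour. A minimal Python sketch (the endpoint URL is a placeholder) of the schedule expansion and the HTTP call the function body would make:

```python
# Hedged sketch: a timer-triggered function that calls a Web API every
# 10 minutes. "0 */10 * * * *" is a six-field NCRONTAB expression:
# {second} {minute} {hour} {day} {month} {day-of-week}.
import urllib.request

def expand_step_field(field: str, upper: int) -> list:
    """Expand a simple NCRONTAB field like '*/10' into the values it matches."""
    if field == "*":
        return list(range(upper))
    if field.startswith("*/"):
        return list(range(0, upper, int(field[2:])))
    return [int(field)]

def call_api(url: str, timeout: float = 10.0) -> int:
    """GET the Web API and return the HTTP status code."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

# Inside an Azure Function (requires the azure-functions package and a
# function.json with "schedule": "0 */10 * * * *"), the entry point would be:
#
#   import azure.functions as func
#
#   def main(mytimer: func.TimerRequest) -> None:
#       call_api("https://example.com/api/endpoint")  # placeholder URL
```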

Related

Possible to change a datasource property in Power App dynamically?

I'm trying to work through a proof-of-concept app that needs to talk to a 3rd-party API. The initial step is to get a token from the 3rd party, which I've done by creating a custom connector which uses no security.

Once I have the token, I need to make additional queries to this 3rd-party API, each of which requires the token to be passed. So I've created a second custom connector which uses API Key security. When I manually create a new connection for this 2nd custom connector, I'm prompted for the token and everything runs as expected.

I've now added both custom connectors to my canvas Power App and get the token I need from the first custom connector in the app's OnStart event. I'd now like to change the connection properties for the second custom connector to use the dynamically generated token.
Is this possible? If not, is there another approach I should be pursuing?
Tks
So it turns out this is possible via policies. Haven't gotten it working yet, but hopefully soon...
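For anyone following up on the policies idea: one pattern is to expose the token as an operation parameter on the second connector and use a "Set HTTP header" policy template to copy it into the API-key header at runtime. A hedged sketch, assuming the APIM-style set-header syntax that connector policies are based on (the header and parameter names here are illustrative, not from the original post):

```xml
<set-header name="X-API-Key" exists-action="override">
  <!-- 'token' is a hypothetical operation parameter that the app would
       populate from the value returned by the first connector in OnStart -->
  <value>@queryParameters('token')</value>
</set-header>
```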

How to prevent cloud scheduler from triggering a function more than once?

I'm triggering a cloud function every minute with cloud scheduler [* * * * *].
The Stackdriver logs indicate the function appears to have been triggered and run twice in the same minute. Is this possible?
Pub/Sub promises at-least-once delivery, but I assumed that GCP would automatically handle duplicate triggers for scheduler -> function workflows.
What is a good pattern for preventing this function from running more than once per minute?
Your function needs to be made "idempotent" in order to ensure that a message gets processed only once. In other words, you'll have to maintain state somewhere (maybe a database) that a message was processed successfully, and check that state to make sure a message doesn't get processed twice.
All non-HTTP type Cloud Functions provide a unique event ID in the context parameter provided to the function invocation. If you see a repeat event ID, that means your function is being invoked again for the same message, for whatever reason.
This need for idempotence is not unique to pubsub or cloud scheduler. It's a concern for all non-HTTP type background functions.
A full discussion of writing idempotent functions is a bit too much for a Stack Overflow answer, but there is a post on the Google Cloud blog that covers the issue pretty well.
See also: Cloud functions and Firebase Firestore with Idempotency
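To make the "maintain state somewhere" advice concrete, here is a minimal, hedged Python sketch: the handler records each event ID and skips duplicates. In a real deployment the set would live in a durable store such as Firestore, with the check-and-write done transactionally.

```python
# Illustrative idempotent handler: skip events whose ID was already processed.
# An in-memory set stands in for a durable store (e.g. Firestore) here.
processed_ids = set()

def handle_event(event_id: str, payload, work=lambda p: None) -> bool:
    """Run `work(payload)` once per event_id; return False for duplicates."""
    if event_id in processed_ids:
        return False             # duplicate delivery: do nothing
    work(payload)                # the actual business logic
    processed_ids.add(event_id)  # mark done only after success
    return True
```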

how to check if Azure Function is still running or not

I have a situation where I have to call an Azure Function periodically.
When I call the function, I need to check the state of the Azure Function.
If the Azure Function is running, I need to postpone the call until it has completed.
I am reading from an email queue (as the emails come in), and I need to send the emails using Amazon SES.
I am using an HTTP trigger, and the email part is working fine.
I don't want the function to be called while it is already running.
In a serverless architecture, each invocation of a service endpoint may be handled by a new instance, and scaling is managed by the scale controller.
There is no built-in way to check whether the function is currently running.
Without understanding more about your use-case, I think this is possible with Durable Functions. Look up Eternal Orchestrations that call themselves on an interval indefinitely. You can then query the status if required and have a workflow in the eternal orchestration that changes depending on certain criteria.

ClojureScript Exponent - Get Data From an API

What is the best way of getting information from an API?
In ClojureScript you can use Ajax GET requests to connect to an API.
For my Exponent app, I want a button that, when pressed, connects to a website, say Google (it doesn't have to be, just an example), and simply returns the data.
I also need authentication for these requests, so how would I add that as well?
In React Native you can use fetch; how would I use this in ClojureScript?
Any help would be much appreciated. Thanks.
Actually, it's really simple:
(js/fetch "https://example.com/api"
          (clj->js {:method "GET" :headers {"Authorization" "Bearer <your-token>"}}))
Just remember that you can access JavaScript functions via js/. And don't forget to pass JS data structures to the function by converting them with clj->js, as shown above (the URL and token are placeholders).

Is it possible to capture an outgoing http call from an ActionScript (Flex) module?

I'm trying to develop a test framework for some ActionScript code we're developing (Flex 3.5). What's happening is this:
As part of a Web Analytics function we are calling a track method in a class, providing the relevant information as part of the call. This method is provided in a library (SWC), and we have no access to the code.
Ultimately the track method sends an outgoing http request to the tracking server. We can see this quite happily in HttpFox.
I was hoping to be able to capture this outgoing request and interrogate it in my test class, allowing us to a) run tests in a more standalone fashion, and b) programmatically determine that the correct information is being tracked.
No problem; just run Charles, a developer tool that displays all HTTP requests leaving your machine:
http://www.charlesproxy.com/
Unless you're going to use a sniffing tool, which would probably be hard to use for programmatic evaluation, I would recommend channelling your requests through a proxy. You could have the track method send the request to a PHP script on the proxy server, evaluate the request content there, and then forward it to the actual tracking server. Since this is a tracking system, you presumably don't need to worry about the response, so it shouldn't be too hard to implement.
You could run a web server on localhost (or any host, really) and just make sure the DNS entry the code is trying to access points to the server you are running.