Let's say I fire up Puppeteer (headless Chrome) in a Cloud Function, and Puppeteer navigates to a public website that does something computationally intensive.
The computation is occurring on the "client side," but the client doing the visiting is a Cloud Function -- so who incurs the compute time?
You pay CPU charges for the Cloud Function for as long as it is active, i.e. from the moment it starts until the moment it finishes (or its return value resolves). It doesn't really matter whether it is computing something itself or waiting for another process to return something. In this case the "client side" is the headless Chrome instance running inside your Cloud Function, so the website's computation runs on your CPU allocation and you incur the compute time.
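To make it concrete, here is a minimal sketch of that setup as a Node.js Cloud Function (the URL is just a placeholder, and the handler shape assumes the Functions Framework): the page's scripts run inside the headless Chrome you launched, so their runtime is part of your function's billed time.

```typescript
import * as functions from '@google-cloud/functions-framework';
import puppeteer from 'puppeteer';

// HTTP-triggered Cloud Function: everything between entry and response,
// including the page's own client-side computation inside headless Chrome,
// counts toward this function's compute time.
functions.http('visitHeavyPage', async (req, res) => {
  const browser = await puppeteer.launch({ args: ['--no-sandbox'] });
  try {
    const page = await browser.newPage();
    // Placeholder URL; the heavy JavaScript it runs executes in this
    // function's headless Chrome process, so you pay for it.
    await page.goto('https://example.com/heavy-computation', {
      waitUntil: 'networkidle0',
    });
    const title = await page.title();
    res.send(`Done: ${title}`);
  } finally {
    await browser.close();
  }
});
```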
I have deployed a cloud function that runs perfectly fine most of the time, but sometimes outputs this generic error.
The function generates PDF documents - sometimes it generates them from HTML with Puppeteer (I think that part pretty much always works), and sometimes it combines PDFs into multi-page documents by invoking itself and loading other URLs. I can very well imagine that it hits some kind of limit when those documents get long and complex - but I have set both the memory limit and the execution time limit as high as the service allows, and it still fails. Looking at the monitoring graphs, neither execution time nor memory usage appears to hit its limit. So the question is: how can I figure out why it fails?
I moved the function to the "2nd generation" of the Cloud Functions feature, where it was possible to grant it more memory. This fixed the issues and it now runs reliably. I still do not quite understand why the graphs under "monitoring" did not hit the indicated memory limits when it failed - that would have made the problem immediately obvious. Anyway, 2nd generation FTW.
I post this as an answer just to let anyone struggling with this problem know that the memory usage graphs may not tell the full story, and your cloud function may require more memory even if it does not appear to hit the limit.
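For reference, the move boiled down to a redeploy roughly like this (function name, runtime, and limits are placeholders, not my actual values; check the current gcloud docs for the exact flags):

```sh
# Redeploy as a 2nd-gen function with more memory and a longer timeout.
# Name, runtime, and limits below are placeholders.
gcloud functions deploy generate-pdf \
  --gen2 \
  --runtime=nodejs20 \
  --trigger-http \
  --memory=2GiB \
  --timeout=540s
```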
When publishing a large number of events to a topic (where the retry and time-to-live are in the minutes), many fail to get delivered to the subscribed functions. Does anyone know of any settings or approaches to ensure scaling reacts quickly without dropping them all?
I am creating an Azure Function app that essentially passes events to an Event Grid topic at a high rate, and other functions subscribed to the topic will handle the events. These events are meant to be short lived and not persist longer than a specified number of minutes. Ideally I want to see the app scale to handle the load without dropping events. The overall goal is that each event will trigger an outbound API call to my own API to test performance/load.
I have reviewed documentation on MSDN and elsewhere, but not much of it fits my scenario (most of it talks in terms of incoming events rather than outbound HTTP events).
For scaling, I have looked into the host.json settings for HTTP (as there are none for Event Grid events, and Event Grid triggers look similar to HTTP triggers), and setting those seemed to make some improvement.
The end result I expect is that every event published to the topic endpoint gets delivered to a function and executed, with a low failed-delivery/drop rate.
What I am seeing instead is that when publishing many events to a topic (at a consistent rate), the majority of events get dead-lettered/dropped.
The Consumption plan is limited by the computing power that is assigned to your function. In essence, there are limits up to which it can scale, and beyond that it becomes the bottleneck.
I suggest having a look at the limitations.
And here you can find some insights about the computing power differences.
If you want to enable automatic scaling, or to scale out the number of VM instances, I suggest using an App Service plan. The cheapest option where scaling is supported is the Standard pricing tier.
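For orientation, this is roughly what the subscriber side looks like as an Event Grid-triggered function, sketched with the Node.js v4 programming model (the function name and outbound URL are hypothetical): every delivered event needs a free instance to run this handler before its retry/TTL window expires, which is exactly where the plan's scaling limits start to bite.

```typescript
import { app, EventGridEvent, InvocationContext } from '@azure/functions';

// Event Grid-triggered function. Each delivered event invokes this handler;
// if the plan cannot scale out fast enough, deliveries pile up, the retry
// window expires, and events get dead-lettered or dropped.
app.eventGrid('handleLoadTestEvent', {
  handler: async (event: EventGridEvent, context: InvocationContext) => {
    context.log(`Processing event ${event.id} of type ${event.eventType}`);
    // Hypothetical outbound call to the API under test (Node 18+ global fetch).
    await fetch('https://my-api.example.com/load-test', {
      method: 'POST',
      body: JSON.stringify(event.data),
      headers: { 'Content-Type': 'application/json' },
    });
  },
});
```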
I have an Azure Function triggered via Service Bus which basically just does a long await of 1 to 4.5 minutes (managed with a cancellation token to prevent the function timing out and the host restarting).
I want to process as many of these long-await messages as I can (ideally about 1200 at the same time).
First I ran my function on an App Service plan (with Concurrent Calls = 1200), but I think each trigger creates a thread, and 1200 threads cause some issues.
So I decided to run it on Consumption with Batch Size 32, with the idea that I can avoid creating tons of threads and instead scale out the Consumption function when it sees the queue build up.
Unfortunately, exactly the opposite happens: the Consumption function will process 32 messages but never scales out, even though the queue has 1000+ items in it. Even worse, sometimes the function just goes to sleep although there are still many items in the queue.
I feel my best option would be to group work into a message, so that instead of 1 message = 1 long await, 1 message could be 100 awaits, for example. But my architecture doesn't really allow me to group messages easily (when some tasks fail and some succeed, dead letters handle that easily for individual messages, but with grouping I would need to maintain state to track it). Is there an existing way to efficiently run many (independent) long-running awaits on an Azure Function, on either a Consumption or an App Service plan?
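To illustrate what I mean by grouping, here is a rough sketch (TypeScript, Node.js v4 programming model; the queue name, connection setting, and message shape are all made up): one message carries many sub-tasks, Promise.allSettled keeps partial failures visible, and the failed IDs are exactly the extra state I would have to track.

```typescript
import { app, InvocationContext } from '@azure/functions';

// Hypothetical shape: one queue message carries many independent sub-tasks
// instead of a single long await.
interface BatchMessage {
  tasks: { id: string; url: string }[];
}

// Placeholder for the real long-running await (1-4.5 minutes in my scenario);
// here it is just an outbound call.
async function runLongTask(task: { id: string; url: string }): Promise<void> {
  const res = await fetch(task.url);
  if (!res.ok) throw new Error(`Task ${task.id} failed with ${res.status}`);
}

app.serviceBusQueue('processBatch', {
  connection: 'ServiceBusConnection', // app setting name, assumed
  queueName: 'long-await-batches',    // hypothetical queue name
  handler: async (message: unknown, context: InvocationContext) => {
    const batch = message as BatchMessage;

    // Run all sub-tasks concurrently; allSettled keeps successes and failures
    // separate, so one bad task does not fail the whole message.
    const results = await Promise.allSettled(batch.tasks.map(runLongTask));

    const failedIds = batch.tasks
      .filter((_, i) => results[i].status === 'rejected')
      .map((t) => t.id);

    if (failedIds.length > 0) {
      // These would have to be re-enqueued or recorded somewhere, which is
      // the extra state tracking I am worried about.
      context.warn(`Failed sub-tasks: ${failedIds.join(', ')}`);
    }
  },
});
```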
I'm developing a dapp and got it working well using web3 and testrpc.
My frontend is currently pretty "chatty" with contract calls (constant methods) and everything works super fast.
I was wondering what kind of latency I should expect on the real network for simple calls. Do I need to aggressively optimize my contract reads?
It depends. If your dApp is running against a node (and it is fully synced), then constant functions will execute similarly to what you're seeing in your testing. If not, then all bets are off. Your latency will depend on the provider you're connecting to.
My best advice is, once you finish development, to deploy to a testnet and run performance tests. Chances are that if you're not running a fully synced local node, and your app is as chatty as you say, you may be disappointed with the results. You would want to look into optimizing your reads, moving some state data out of the contract (if possible), or turning your client into a light node.
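As a small illustration of what optimizing reads can mean at the simplest level (the provider URL, ABI, and contract address below are placeholders), independent constant calls can at least be fired concurrently instead of awaited one after another, so the page pays roughly one round trip of latency instead of several:

```typescript
import Web3 from 'web3';

// Placeholders: provider URL, ABI, and address are not from the original question.
const abi: any[] = [/* contract ABI goes here */];
const web3 = new Web3('https://mainnet.infura.io/v3/<project-id>');
const token = new web3.eth.Contract(abi as any, '0x0000000000000000000000000000000000000000');

async function loadDashboard(user: string) {
  // Each .call() is a JSON-RPC round trip to the provider; firing the
  // independent reads concurrently hides most of that per-call latency.
  const [balance, totalSupply, decimals] = await Promise.all([
    token.methods.balanceOf(user).call(),
    token.methods.totalSupply().call(),
    token.methods.decimals().call(),
  ]);
  return { balance, totalSupply, decimals };
}
```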
I am facing the following problem:
- There is a calculation that performs complex maths during the loading of the application, and it takes a considerably long time (about 20 seconds), during which the CPU is used at nearly 100% and the application looks like it is frozen.
Since it is a mobile application, this must be prevented, even at the cost of extending the initial loading time, but there is no direct access to the calculating code since it is inside a 3rd-party library.
Is there a way to generally prevent an AIR application from hogging most of the CPU?
On desktop, you would use the Workers API. It's pretty new; I'd recommend it for AS3-only projects. If you use Flex, it's better to wait a few months.
Workers is a multi-threading API that allows you to have a UI thread and a working thread. This will still use 100% of the CPU, but the UI won't get stuck. Here are some links to get you started:
Thibault Imbert - sneak peek,
Intro to as 3 workers,
AS3 Workers livedocs
However, on mobile you can't use Workers, so you'd have to break your function apart and insert some delays, for example with callLater or setTimeout. It's hard to restructure a function like that, but if it has a loop, you can insert a callLater call after every x iterations. You can parametrize both x and the delay of the callLater call to find the right balance. After callLater is called, the UI will be rendered, and events will be generated and caught. If you don't need them, remove their listeners or stop their propagation with a higher-priority handler. If you need, I can post a source example of callLater in a loop.
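In the meantime, here is the shape of that pattern, sketched in TypeScript with setTimeout (AS3 has setTimeout in flash.utils, and callLater slots into the same place); chunkSize is the "x" above and delayMs is the delay you'd tune:

```typescript
// Process `items` in chunks, yielding between chunks so the UI can render
// and events can be handled.
function processInChunks<T>(
  items: T[],
  work: (item: T) => void,
  chunkSize = 100,  // "x": iterations per chunk
  delayMs = 1,      // delay before the next chunk is scheduled
  done?: () => void,
) {
  let index = 0;

  function runChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      work(items[index]); // the expensive per-item computation
    }
    if (index < items.length) {
      // Yield here: the UI gets a chance to render before the next chunk runs.
      setTimeout(runChunk, delayMs);
    } else if (done) {
      done();
    }
  }

  runChunk();
}
```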