Is the Pingdom uptime monitor increasing my GA bounce rate?

I've been looking at my Google Analytics account and noticed a surge in the bounce rate. Is it possible the Pingdom uptime monitor is causing this increase? There seems to be a correlation between the two.
Many thanks

That is very likely, since Pingdom's uptime monitor makes HTTP(S) calls to page(s) on your website to verify that they're up. You can avoid this by adding Pingdom's IP addresses to GA's exclusion list so their requests are not logged as visits, as documented in the Google Analytics help pages.
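If you'd rather handle it on the page itself, here is a minimal sketch (assuming the standard analytics.js loader is already present, and that Pingdom's bot identifies itself in its user agent string — verify the exact string your checks send):

    // Sketch: skip sending GA hits for Pingdom's monitoring bot.
    // Assumes the standard analytics.js loader snippet is on the page.
    // 'UA-XXXXX-Y' is a placeholder property ID.
    if (!/pingdom/i.test(navigator.userAgent)) {
      ga('create', 'UA-XXXXX-Y', 'auto');
      ga('send', 'pageview');
    }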

I don't think it can increase your bounce rate, because Analytics filters out bots. But if you are sure that Pingdom runs analytics.js and is not being filtered by default, you can add their domain/IP in a custom filter (on the view).

As a first step, you could check whether the increased bounce rate can be attributed to any particular traffic source. This might already point to Pingdom as a referrer. You should also check with Pingdom's FAQ or support whether they run your scripts during their measurements; for example, this product feature of theirs claims to run JavaScript.
If scripts are executed during measurement, then the Analytics script is also very likely to be executed, and therefore a call will be made to GA's servers for your site. In this case, the solutions mentioned in other responses can be used, e.g. filtering based on the traffic source's domain or IP.
If scripts are not run, then you'll have to look for other causes. Again, a breakdown of bounce rate by traffic source could be a good place to start.

Bulk email download from Outlook 365

I am trying to download/access, on a daily basis, all the emails exchanged over Outlook 365 by employees of an organization, who obviously use Outlook 365. After the download finishes, I'll be running some background jobs on these emails.
I have the option of doing this via the EWS APIs, but the throttling policies are turning out to be a pain and are affecting the predictability of the system. The daily number of emails to be accessed could range from 0.1 to 1 million or more.
I am currently exploring the upcoming Graph API to see if it helps solve this. Another way out would be routing these emails to, say, AWS SES or Apache James and accessing/downloading them from there, avoiding throttling altogether, but I am trying to avoid additional servers in the deployment for now.
My question -
Has anybody experienced this issue, and was there any reliable way around it while using Outlook-supported email APIs?
Inefficient code is more likely the cause of the throttling than the API itself (e.g. if you're not using batching, or are requesting more properties than you need), so the first thing you should do is make sure you optimize the code, as all the client-based APIs throttle similarly. 0.1 to 1 million emails over the span of a day isn't that many to process, in my experience with EWS, especially if you're using impersonation, where the throttling cost is dispersed across the mailboxes you're accessing.
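As an illustration of those two optimizations — requesting only the properties you need and paging in large batches — here is a sketch against the Microsoft Graph REST API the question mentions exploring (Node.js 18+, where fetch is built in; the token acquisition and user ID are placeholders you would supply from your own auth flow):

    // Sketch: page through a mailbox requesting only needed properties.
    const GRAPH = "https://graph.microsoft.com/v1.0";

    async function downloadMailbox(userId, accessToken) {
      // Ask only for the fields we actually process, 100 messages per page.
      let url = `${GRAPH}/users/${userId}/messages` +
                "?$select=id,subject,receivedDateTime,from&$top=100";
      const messages = [];
      while (url) {
        const res = await fetch(url, {
          headers: { Authorization: `Bearer ${accessToken}` },
        });
        if (res.status === 429) {
          // Throttled: back off for the period the service asks for, then retry.
          const wait = Number(res.headers.get("Retry-After") || 10);
          await new Promise((r) => setTimeout(r, wait * 1000));
          continue;
        }
        const page = await res.json();
        messages.push(...page.value);
        url = page["@odata.nextLink"]; // undefined on the last page
      }
      return messages;
    }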

Can I stop Google billing me if I reach my free API limit?

Apparently Google now requires you to provide billing details to use Google Maps on your web site. If I understand it correctly, you get a $200 free allowance, and after that they start charging you.
Is there a way to tell Google: don't charge me after the free $200, and just stop displaying the map?
There is no way to do that.
The only two things available now are:
Based on your monthly usage, calculate approximately your daily usage (per API) and set daily limits. You can do so by going to the API Console, select an API, navigate to the Quotas tab, and edit the daily or per-second quotas. You can use this Calculator.
Set billing budgets and alarms.
To control your spend, you can set billing budgets and alarms so that you are notified when your usage reaches a given budget. Here’s how.
Note that these alarms are only "an alarm based on a budget"; they won't stop your project's usage.
I asked Cloud support about this, and they told me:
You can use Programmatic Budget Notifications in order to perform custom actions when reaching spend thresholds. For instance, you can disable billing on your project when reaching the free tier limit.
https://cloud.google.com/billing/docs/how-to/notify
Note this will disable the billing completely and can even cause your Cloud projects to be deleted!
See the warning:
This example removes billing from your project, shutting down all resources. Resources might not shut down gracefully, and might be irretrievably deleted. There is no graceful recovery if you disable billing. You can re-enable billing, but there is no guarantee of service recovery and manual configuration is required.
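For illustration, a sketch of that documented pattern (a Node.js Cloud Function subscribed to the budget's Pub/Sub topic; the project ID is a placeholder, and given the warning above you should test this carefully before relying on it):

    // Sketch: disable billing when the budget notification reports overspend.
    const { CloudBillingClient } = require("@google-cloud/billing");

    const PROJECT_ID = "my-project-id"; // placeholder
    const billing = new CloudBillingClient();

    exports.stopBilling = async (pubsubEvent) => {
      const data = JSON.parse(
        Buffer.from(pubsubEvent.data, "base64").toString()
      );
      // Do nothing until actual spend reaches the budgeted amount.
      if (data.costAmount <= data.budgetAmount) {
        console.log(`No action: ${data.costAmount} <= ${data.budgetAmount}`);
        return;
      }
      // Detach the billing account; this shuts down the project's resources.
      await billing.updateProjectBillingInfo({
        name: `projects/${PROJECT_ID}`,
        projectBillingInfo: { billingAccountName: "" },
      });
      console.log(`Billing disabled for ${PROJECT_ID}`);
    };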
Some things may be outside your control. Google support has confirmed to me that their own bot's hits count towards billable Maps API usage. So they decide the level of spidering, and then charge for it.
I believe this is called the "Fish in a Barrel" business model

Any known limits to calls per minute for webapps?

I'm getting inconsistent behaviour when sending POST requests to a Google Apps Script deployed as a web app.
I have a desktop app sending POST calls to a GAS web app. These calls can vary widely in cadence, from one every several minutes to bursts of dozens per second.
In my tests I have found requests that are seemingly lost, and requests that don't progress through the web app's internal logic flow (as if script instances get cut off or interrupted?), while others work flawlessly. There is no evident pattern.
However, experimenting around, I found that if I space out the calls, adding a pause between requests, everything normalizes.
Are there established, known limits for this? Is introducing these artificial intervals between calls the only option I have? I have not found information on this in the GAS quotas page.
Any help and ideas would be appreciated.
Any help and ideas would be appreciated.
To confirm the answer: there is no evident or documented per-minute limit on the number of requests to a GAS web app.
The issue I'm experiencing is related to concurrency. Even when they come from the same source, fast-paced requests can produce concurrency issues when accessing storage services like Cache or Properties.
This should be handled using the Lock Service.
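A minimal sketch of that pattern in a web app's doPost (the property key and the work done under the lock are placeholders):

    // Sketch: serialize access to PropertiesService across overlapping requests.
    function doPost(e) {
      var lock = LockService.getScriptLock();
      lock.waitLock(30000); // wait up to 30 s for other invocations to finish
      try {
        var props = PropertiesService.getScriptProperties();
        var count = Number(props.getProperty('REQUEST_COUNT') || 0);
        props.setProperty('REQUEST_COUNT', String(count + 1)); // placeholder work
        return ContentService.createTextOutput('ok: ' + (count + 1));
      } finally {
        lock.releaseLock(); // always release, even if the body throws
      }
    }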

Salesforce: Google maps query status 620 G_GEO_TOO_MANY_QUERIES

In Salesforce I have created a future method that makes a Google Maps geocode callout. I know there is a limit of 2,500 requests per day but our instance has made no more than 100 requests today. Maybe the same number of requests yesterday. Yet the response code is 620 G_GEO_TOO_MANY_QUERIES.
Could it be that Google sees the IP address of the Salesforce instance and aggregates all of these requests as coming from one location, so that other companies sharing the address are causing my instance to hit this limit?
If not can anyone suggest another cause?
This discussion suggests that it is the shared origin from Salesforce that is messing it up.
This only makes sense, though, if you are doing the geocode lookup from the server and not from a client. If you do it from the client, it uses the client's IP, and you are dealing with the local client's lookup limits (see below).
If you are doing it on the server, you should also check whether what you are doing is actually legal and does not break Google's ToS. This discussion gives some background on that, and also a solution if you need to fix this on the server (buy a licence).
To be complete, G_GEO_TOO_MANY_QUERIES can mean one of two things:
You exceeded the daily limit (too many in a day)
You exceeded the rate limit (too many requests in too short a period)
Google has not specified an exact limit as far as I know, but they don't seem to enjoy automated lookups. If you look at various libraries and plugins, you'll see that all of them force a delay between each request, and often they add a little randomness to the delay. You could experiment with whether this makes any difference.
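For illustration, a sketch of that delay-plus-jitter pattern in JavaScript (Node.js 18+ or a browser; the API key, addresses, and delay values are placeholders to tune):

    // Sketch: space out geocoding requests with a randomized delay.
    const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

    async function geocodeAll(addresses, apiKey) {
      const results = [];
      for (const address of addresses) {
        const url =
          "https://maps.googleapis.com/maps/api/geocode/json" +
          `?address=${encodeURIComponent(address)}&key=${apiKey}`;
        const res = await fetch(url);
        results.push(await res.json());
        // Base delay plus jitter so requests don't arrive on a fixed beat.
        await sleep(500 + Math.random() * 500);
      }
      return results;
    }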
Did this end up working for you? We're hitting a server-side 620 and all configuration looks OK... We have a Premier license and upgraded to 250k requests per day.

Logging viewing time on website

Is there a way to log how long a visitor stays on my website?
Write some JavaScript ping function to send heartbeat requests every few seconds.
That is, if you wish to do it manually. Otherwise, use some statistics software. Many hosts provide something for you to use. Or just add Google Analytics to your site.
Following Developer Art's suggestion, there is a very good implementation of this heartbeat method at ajaxpatterns.org.
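A minimal sketch of such a heartbeat (the /heartbeat endpoint is a placeholder your server would need to log):

    // Sketch: report time-on-page by pinging the server every few seconds.
    const started = Date.now();

    setInterval(() => {
      const seconds = Math.round((Date.now() - started) / 1000);
      // sendBeacon survives page unloads better than a normal XHR/fetch.
      navigator.sendBeacon("/heartbeat", JSON.stringify({ seconds }));
    }, 5000);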
Some services/tools: Google Analytics, Piwik, Woopra, ...
Google Analytics will record the time between pageviews on your site. This means that single-pageview sessions (bounces) are not factored into time on site, nor is the time a user spends on their final pageview of a multi-page visit. For example, a visitor who lands at 12:00, opens a second page at 12:03, and leaves at 12:10 is recorded as 3 minutes on site. It'll give you a decent ballpark figure, though.