EWS request not responding with error on Mac - exchangewebservices

In our Outlook add-in, we are using makeEwsRequestAsync to get the current email's MimeContent. We understand there is a 1MB request/response limit when using EWS via the JavaScript API. When we reach this limit on Windows, we at least see this message:
"Response exceeds 1 MB size limit. Please modify your EWS request".
However, when making this request for a large email (>1MB) on Mac (Outlook 2016), we don't get any sort of response whatsoever. The add-in just seems to hang. Is there any way we can catch this error on Mac? We would like to show a dialog or something notifying the user that there was a size limit error, but we can never actually catch the error.
I found someone with a similar question, but it has no answers.
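One way to cope with a callback that never fires is a watchdog timer: race the EWS call against a timeout and surface a synthetic error the add-in can act on. Below is a minimal sketch in plain JavaScript; the 30-second timeout and the `ewsRequestXml` variable are illustrative assumptions, not values from the question.

```javascript
// Wrap a callback-style async call with a watchdog timeout, so that a call
// that never invokes its callback (as makeEwsRequestAsync appears to behave
// on Mac for >1 MB responses) still settles with an error we can show.
function withTimeout(startCall, timeoutMs) {
  return new Promise(function (resolve, reject) {
    var settled = false;
    var timer = setTimeout(function () {
      if (!settled) {
        settled = true;
        reject(new Error("EWS request timed out - response may exceed the 1 MB limit"));
      }
    }, timeoutMs);
    startCall(function (result) {
      if (!settled) {
        settled = true;
        clearTimeout(timer);
        resolve(result);
      }
    });
  });
}

// Hypothetical usage inside the add-in (timeout value is a guess):
// withTimeout(function (done) {
//   Office.context.mailbox.makeEwsRequestAsync(ewsRequestXml, done);
// }, 30000)
//   .then(function (asyncResult) { /* parse MimeContent */ })
//   .catch(function (err) { /* show a "message too large" dialog */ });
```

This does not recover the content, but it at least lets the add-in stop waiting and tell the user what likely happened.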

Related

CAS too many redirects and 500 internal server error

I am using Apereo CAS 6.3.3 generated by CAS Overlay project.
The integration with the application and LDAP is working well, but I have noticed two problems. They are intermittent and occur in roughly 20% of requests.
1.) If someone directly accesses the CAS login page with ?execution=anything, the page shows the following HTTP 500 error:
org.springframework.webflow.execution.repository.BadlyFormattedFlowExecutionKeyException: Badly formatted flow execution key 'anything', the expected format is '_'
Is there any way the error can be disabled so that the page redirects to the application login page instead?
2.) Is there any way the "too many redirects" error can be disabled for expired service tickets, so that the end user is redirected to the login page instead of first seeing the error message in the browser?
Thanks.
Is there any way the error can be disabled so that the page redirects to the application login page instead?
Applications that integrate with an SSO solution such as CAS do not (and should not) have their own login page. After all, that's why they use CAS.
That said, to handle this error you'll need to modify the CAS login webflow and have it catch this error using what Spring Webflow calls global exception handlers. Only then can you decide how to respond and handle the scenario of bad flow execution states.
Is there any way the "too many redirects" error can be disabled for expired service tickets, so that the end user is redirected to the login page instead of first seeing the error message in the browser?
Yes. There is.
You need to get the application to respond correctly to failed validation attempts. If it sees a validation failure due to an expired ticket, the application should honor the failure, and ask for a new non-expired service ticket.
You may also need to adjust the service ticket timeout; perhaps there is lag or delay such that the ticket is seen as expired by the time it reaches the application, and is sent back to CAS for validation.
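If ticket lifetime turns out to be the culprit, it can be raised via the CAS properties. A sketch, assuming CAS 6.x property names and defaults; verify against the settings reference for your exact version:

```properties
# cas.properties (or application.yml equivalent) - illustrative values only.
# Give service tickets a longer lifetime so they are still valid by the time
# the application sends them back to CAS for validation (default is 10 s).
cas.ticket.st.time-to-kill-in-seconds=30
```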
The best way to stop an infinite redirect loop is to stop the entity that is causing or sending those, and correct the mistake rather than hiding it with an error message. That's just an aspirin, and while it helps, it does not treat the underlying problem.
to the login page instead of first seeing the error message on the browser?
There is no login page or browser to redirect to. The failure is the result of a back-channel validation call; no browser is involved.

What is Google app script URLFetch service response size limit?

I'm writing a Google App Script for a Google spreadsheet and I'm facing a problem with the URLFetch service.
I'm calling an external service, and I sometimes receive an empty response, just nothing. The external service is pretty stable and should always return something, at least an error message if something goes wrong.
I can only solve this by modifying the request so the expected response is smaller, and this always fixes the issue, which makes me think it's a response size limitation.
I doubt it's a random problem, because rerunning the script with the same request always fails unless, as I said, I modify the request to receive a smaller response.
But on Google's quota page, I can't find a clear answer to my question.
At the time of asking this question, I'm facing a problem reading a response that should be around 14.1 KB (I determined the size by running the request manually in my browser).
Does anyone know if there is a limit, and what exactly it is?
In my experience the limit is 10 MB; it is definitely larger than 14.1 KB. An application I developed (http://www.blinkreports.com) routinely receives responses in excess of 1 MB.
Under the assumption that the same limits apply to UrlFetch in Google Apps Script as in App Engine, these limits apply:
request size: 10 megabytes
response size: 32 megabytes
See https://cloud.google.com/appengine/docs/java/urlfetch/#Java_Quotas_and_limits
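Whatever the exact limit, one defensive option is to treat a successful-but-empty response as an error, so the script can retry with a smaller request instead of silently processing nothing. A sketch; `fetchFn` is an injected stand-in for `UrlFetchApp.fetch` so the logic can also be exercised outside Apps Script:

```javascript
// Guard against silently-empty UrlFetch responses: a 2xx status with an
// empty body is treated as a failure (possible size-limit truncation).
// Inside a script you would pass UrlFetchApp.fetch.bind(UrlFetchApp).
function fetchOrFail(fetchFn, url) {
  var response = fetchFn(url, { muteHttpExceptions: true });
  var code = response.getResponseCode();
  var body = response.getContentText();
  if (code >= 200 && code < 300 && body.length === 0) {
    throw new Error("Empty body for " + url +
      " - response may have been truncated by a size limit");
  }
  return body;
}
```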

Google Drive download limit / throttle on individual file downloadUrls?

I'm seeing a 403 "Access to the webpage was denied" error on one specific file being accessed via the Drive SDK. It was working earlier, the app permissions are set correctly, and we're having success with other files using different tokens against the same app.
We're getting the downloadUrl from the SDK successfully, then seeing the error message only after users are redirected to the downloadUrl. Because of that it's hard to track, but we've confirmed that it's working for some, but not for others — it hasn't fully stopped.
The full error text is:
Access to the webpage was denied
You are not authorized to access the webpage at [...] You may need to sign in.
HTTP Error 403 (Forbidden): The server refused to fulfill the request.
We're making the GET request to the downloadUrl with a (valid) access_token parameter, all of that.
My question is this: could this be related to the reported Google Drive outage that's currently happening, or is there some sort of throttle/limit to access of a single file over the drive API? I've never seen this behavior before, and this response isn't listed among the standard 403 responses.
I have just seen something similar. I was using a freshly acquired access token, so I don't think it's OAuth related. My working theory is that the downloadUrl link was stale. When I fetched fresh metadata, which had a different downloadUrl value, the download worked using the same access token that had previously failed.
This is only a theory, since it isn't documented anywhere, and I would actually expect 410 (or even 301) to be a much more appropriate status than 403.
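If the stale-downloadUrl theory holds, a refresh-and-retry wrapper is a cheap guard. A sketch; `download` and `getMetadata` are hypothetical stand-ins for an authorized GET and the Drive SDK's files.get call, not real SDK method names:

```javascript
// On a 403 from a stored downloadUrl, re-fetch the file's metadata to get a
// fresh downloadUrl and retry the download once with the same access token.
function downloadWithRefresh(download, getMetadata, fileId, downloadUrl) {
  var result = download(downloadUrl);
  if (result.status === 403) {
    // The fresh metadata may carry a different downloadUrl value.
    var freshUrl = getMetadata(fileId).downloadUrl;
    result = download(freshUrl);
  }
  return result;
}
```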

Authentication token expiring within 10 seconds instead of 10 minutes

We currently have an implementation that uses boxes API. Our authentication process follows the process outlined here:
http://developers.box.com/get-started/#authenticating
Sometime in the recent past this has stopped working. When we go to the OAuth URL (for example, https://www.box.net/api/1.0/auth/rev37d850p6pixlemm5ok8doxj2g77kg), it initially shows the login credentials page, but faster than a user could reasonably enter their credentials, the page starts returning "expired ticket". If I go to the token's page immediately after creating it, I can reload the page a few times before it goes into the "expired ticket" state. This is clearly not consistent with the 10-minute expiry time stated in the documentation.
We've had this authentication working correctly up to now, so it seems like something has changed.
We are investigating. More news once we have some additional information.
Update: We've identified the bug, and will be pushing a fix this afternoon.
The fix has been rolled out. Please let us know if you are still experiencing any problems with SSO.
Our Android app has the same problem. As far as I have investigated, calling the get_auth_token API causes the ticket to expire. So you have to make sure the user has successfully logged in BEFORE attempting to get the authentication token (which is not what the Box SDK for Android does). But I don't see a viable way to check whether the user has logged in.

GMail and POP3 RETR problem - switch to IMAP?

When I access my Gmail inbox over POP3, it seems that after fetching a given email with the RETR command, then QUIT-ting and reconnecting, the previously RETR-ieved email is no longer listed by LIST.
Then, after going to Gmail settings > Forwarding and POP/IMAP and selecting "Enable POP for all mail (even mail that's already been downloaded)", all emails are LIST-ed again on the next login; but if I RETR any of them, it disappears from LIST again after re-logging in.
I can then go to Gmail settings again and repeat the whole process, but that's a show-stopper for me, as I'm writing a script that should work without any manual steps.
Am I missing something, or can only IMAP help here?
(EDIT: RFC http://www.ietf.org/rfc/rfc1939.txt doesn't say a word about RETR command deleting messages)
This is intended behaviour of Gmail. According to this question, "[a]ll messages may be downloaded to another computer once; after downloading mail, it will not download again."
There's also a 'recent mode', in which the last 30 days of mail are fetched, regardless of whether it's been sent to another POP client already.
That said, don't try to fetch all your mail from different computers in a short period of time, as Gmail may block your account for 24 hours.
I strongly suggest using IMAP.
Gmail's POP3 configuration may sometimes be confusing. You can find Gmail's POP3 behavior documented here.
Switching to IMAP is a very good solution.
This is a common problem; unfortunately, it does not always have an easy solution. Hopefully this information will help you and others arrive at the implementation that best suits your needs. Disclaimer: if you have the option or capability of using IMAP instead of POP3, it certainly makes things more manageable.
Gmail has its own POP3 implementation, so not all of this is relevant to other POP3 providers.
Here is the lifecycle of the issue and some information that can help you manage it:
You connect to the POP3 server either in NORMAL mode or RECENT mode. This puts the "session" on the POP server into a "transaction state".
Recent mode is used by prefixing the username at connection time with "recent:" (i.e. "recent:" + username). Recent mode returns the last 30 days of email on the server, if the messages have not been removed; note that this supersedes the UIDL command, which I touch on below. Since it always returns the last 30 days, multiple clients will all receive the same messages in recent mode.
Normal mode is the default. Normal mode respects the limitations of the commands you choose to use. UIDL returns a chunk of roughly 250 of the oldest emails on the server: if you have 500 emails on the server and do not remove any, UIDL will return the id and unique identifier for those first 250 emails regardless, so you may never become aware of the newer 250. The caveat is this: in the web console where you configure POP, Gmail has an option to "Enable POP from now on". Selecting and saving that makes the POP server use that moment's timestamp as the new "oldest" time, so UIDL will start returning messages to you from that point onward, until you reach the 250 mark again (assuming you have not removed them).
It is important to note that the transaction state exists until you issue the QUIT command. Upon issuing that command, the server enters the "update" state, where it applies the updates you requested, such as DELE commands, or pops messages after they have been downloaded. Until QUIT is issued successfully, nothing is deleted and the server state does not change.
The STAT command shows you the number of emails in the POP3 stack on your server.
The RETR command retrieves (downloads) the email, but it is not marked as downloaded until you successfully end the session.
UIDL, which many developers use to retrieve message numbers and unique identifiers, is very useful if you maintain the state of the server and pop the email. UIDL only ever returns the oldest 250-ish emails (I have seen 251-255), so constantly polling for new email is dangerous if old email hasn't been removed. Also: if you need to delete email, make sure the Gmail setting "Keep a copy in my inbox" is configured in the web console, so that you keep those emails as a backup.
The LIST command would solve your problem of getting more than 250 emails back in normal mode (note: you still need to maintain an id file locally to cross-check incoming mail and know whether it is new or old). HOWEVER: this command also returns mail from the SENT box, which for many is not a viable solution.
Hints:
If you manage the inbox quickly and effectively and do not believe 250 to be a limiting factor in your process, UIDL and RETR will work.
If you cannot keep your inbox below 250 but also need access to new email, AND you do not expect the inbox to grow to an outrageous size and performance is not a concern, RECENT mode should work.
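The local id file mentioned above can be as simple as a persisted set of UIDL unique-ids. A sketch of the cross-check in plain JavaScript, assuming the standard "msg-number unique-id" shape of UIDL response lines:

```javascript
// Cross-check a UIDL listing against a locally persisted set of known
// unique-ids, so the client can tell new mail from mail it has already
// processed (necessary in normal mode, where UIDL keeps returning the
// same oldest ~250 messages until they are removed).
function findNewMessages(uidlLines, knownIds) {
  // Each UIDL line looks like "<msg-number> <unique-id>".
  return uidlLines
    .map(function (line) {
      var parts = line.trim().split(/\s+/);
      return { number: parseInt(parts[0], 10), id: parts[1] };
    })
    .filter(function (msg) { return !knownIds.has(msg.id); });
}
```

After downloading each new message with RETR, its unique-id would be appended to the persisted set so the next poll skips it.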