We have an MP4 video on our site; it plays fine in IE9+, Firefox, Chrome, and Chrome on mac. However, on Safari, the video doesn't play at all - it does trigger a "stalled" event and then nothing loads. I would post our HTML, but I traced the problem further by finding that Safari wouldn't play it even when navigating to the original MP4's URL. When downloaded and played locally, the video works fine in Quicktime.
The weirdest part of this is that, of all our developers, I can only get the video to work in Safari when I run the related server from my own development computer. What's more, other MP4 files in the same directory have a similar problem. This has been the key clue for me, and I've been searching for any little difference in the way the videos transfer from the server: request/response headers, exact file size, etc.
Request headers, copied from Chrome (only because Safari is harder to copy/paste from):
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding:gzip, deflate, sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
DNT:1
Host:*************:8443
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36
Response Headers
Accept-Ranges:bytes
Content-Length:44875102
Content-Type:video/mp4;charset=UTF-8
Date:Tue, 30 Dec 2014 21:11:51 GMT
ETag:W/"44875102-1419959755000"
Last-Modified:Tue, 30 Dec 2014 17:15:55 GMT
Server:Apache-Coyote/1.1
(Also, just in case this reminds you of an older issue: I'm aware Safari on Windows has been dead for ages. This issue is occurring on OS X.)
EDIT: New info that might help a bit. I took a personal video from my own webserver, which worked from there in the problematic Safari browsers in question, and copied it into our server's local video directory. Served from there, it encounters the same issue as our other videos. This suggests to me that the MP4 itself may not matter; this is probably a server issue of some sort with our Tomcat 7 webserver. We do have the Content-Types registered correctly, which at least covers the basics, but I am curious whether there are other necessary parts.
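For reference, by "registered correctly" I mean the usual Tomcat mime-mapping in web.xml (this is the standard form; ours is equivalent):

<mime-mapping>
    <extension>mp4</extension>
    <mime-type>video/mp4</mime-type>
</mime-mapping>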
MORE INFO: I didn't think to mention this initially, but we are loading our webpages and videos over an HTTPS connection. Most of our test servers do not have valid certificates, and so we need to click through the standard browser warning that "This server might not be who it says". We are now looking into what it would take to have correct certificates on all our servers.
Safari requires the web server to support the "Range" request header in order to play your media content.
https://developer.apple.com/library/safari/documentation/AppleApplications/Reference/SafariWebContent/CreatingVideoforSafarioniPhone/CreatingVideoforSafarioniPhone.html#//apple_ref/doc/uid/TP40006514-SW6
For a legitimate response to a "Range" request, your web server needs to return status code 206 (Partial Content).
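As a rough sketch of what that means in a Java servlet (all names and paths here are assumed for illustration; a more complete implementation appears in a later answer below):

import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class VideoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        File file = new File("/path/to/video.mp4"); // assumed location
        long length = file.length();
        long start = 0, end = length - 1;

        String range = req.getHeader("Range"); // e.g. "bytes=0-1023"
        if (range != null && range.startsWith("bytes=")) {
            String[] parts = range.substring("bytes=".length()).split("-");
            if (parts[0].isEmpty()) {
                // Suffix form "bytes=-N": the last N bytes of the file.
                start = Math.max(0, length - Long.parseLong(parts[1]));
            } else {
                start = Long.parseLong(parts[0]);
                if (parts.length > 1 && !parts[1].isEmpty()) {
                    end = Math.min(Long.parseLong(parts[1]), length - 1);
                }
            }
            resp.setStatus(206); // Partial Content
            resp.setHeader("Content-Range", "bytes " + start + "-" + end + "/" + length);
        }

        resp.setContentType("video/mp4");
        resp.setHeader("Accept-Ranges", "bytes");
        resp.setHeader("Content-Length", String.valueOf(end - start + 1));

        // Copy exactly the requested byte range to the response.
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            raf.seek(start);
            OutputStream out = resp.getOutputStream();
            byte[] buf = new byte[8192];
            long remaining = end - start + 1;
            while (remaining > 0) {
                int n = raf.read(buf, 0, (int) Math.min(buf.length, remaining));
                if (n == -1) break;
                out.write(buf, 0, n);
                remaining -= n;
            }
        }
    }
}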
I had a similar problem with audio. The solution was to add a source tag inside the audio tag. Can you try the following in your case:
<video loop controls='true' width='100%' height='100%'>
<source src='//some_video.mp4' type='video/mp4'>
</video>
I uploaded a new MP4 file, but it played in Safari only (on both my Mac and my iPhone), not in Chrome, Oasis, Firefox, or Brave. The HTML code was identical to previous successes. The file size and dimensions were fine. But the codecs on the old, working files were "H.264, AAC", while the codecs on the new, non-working files were "MPEG-4, AAC". I edit my video files in VideoPad, so I looked at the specification selections in the "Export file as" options and, sure enough, the codec defaulted to MPEG-4. I selected H.264, exported the file, uploaded it to AWS, and made it public. I retried my new files in the four failing browsers and BINGO!, they all worked. There is a God!
Make sure controls='true' and type='video/mp4' are given in your HTML code.
<video loop controls='true' width='100%' height='100%' src='//some_video.mp4' type='video/mp4'></video>
This could indeed be an issue of missing byte-range support, depending on the version you are using. It was added to the DMSDownloadServlet in MAGNOLIA-3855 (Magnolia fix version 4.4.6).
Just ran into the same issue. All headers, range, etc. were correct. However, I had a poorly constructed service worker. All other browsers handled the failure, Safari did not. Temporarily removed the service worker, and things are back to normal.
...
On a side note, does charset make any sense on the video/mp4 type at all? Try removing the charset on it.
EDIT: Yes, charset might be the problem, see: Specify content-type for documents uploaded in Magnolia
EDIT2: Not charset, woops, reading comprehension fail. Might be byte range?
To quote: "[...] we found out that Safari/iOS "uses HTTP byte-ranges for requesting audio and video files." Now we guess that the Magnolia DMS file serving doesn't support this feature, and hence the streaming fails."
What happens if you add these to your .htaccess?
AddType video/ogg .ogv
AddType video/mp4 .mp4
AddType video/webm .webm
Recently, my team ran into a particular issue that resulted in the same behavior. We were using Apache 2.4 and noticed that if we had an authentication layer such as .htpasswd enabled, Safari would not display videos at all even after authenticating. It's almost as if it does not continue to honor the initial authentication clearance for certain types of subsequent HTTP requests.
Sorry I don't have anything more technical to provide, but it's something to check for anyone experiencing video issues only in Safari.
I ran into the same problem and solved it, but since none of the other answers here matches my case, I'll leave my solution for whoever comes next.
I've been writing my own video streaming server which, in the case in question, simply returns a "Ranged" MP4 file, and I found that, for some reason, Safari does not play video carried in an HTTP response that lacks a "Connection" response header.
Please forgive me if you've already solved this issue!
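For illustration only (the post doesn't say what language the server is written in), in a Java servlet that would amount to something like:

// Hypothetical: explicitly emit the Connection header Safari appears to expect.
response.setHeader("Connection", "keep-alive");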
I've had the same problem with videos from my server in Safari. I was able to solve it by using POSTMAN/INSOMNIA to check the headers my server was sending. Chrome can trick you, since the video works fine there!
If the video is not ranged (a full video request), your server must return status 200, and check that 'Accept-Ranges: bytes' is sent by your server.
Header sample status 200:
Server: nginx
Date: Wed, 25 Jul 2018 17:34:18 GMT
Content-Type: video/mp4
Content-Length: 22995782
Connection: keep-alive
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, PATCH, DELETE
Access-Control-Allow-Headers: X-Requested-With,content-type
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
If the video is ranged, your server must return status 206 with the range headers set correctly.
Header sample status 206:
Server: nginx
Date: Wed, 25 Jul 2018 18:13:07 GMT
Content-Type: video/mp4
Content-Length: 1023
Connection: keep-alive
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, PATCH, DELETE
Access-Control-Allow-Headers: X-Requested-With,content-type
Access-Control-Allow-Credentials: true
Accept-Ranges: bytes
Content-Range: bytes 1-1023/22995782
I hope this helps you! My best regards,
Paulo Durço
Safari and iPhone require the "Range" request header to play your media content.
You have to handle Range on the server side.
if (request.getHeader("Range") != null) {
    // melpUploadFiles, request, response and output come from the surrounding
    // servlet code (not shown in this answer).
    resfilename = melpUploadFiles.getFilename();
    File fileloc = new File(melpUploadFiles.getFilePath());
    long fileLength = fileloc.length();

    // Parse "bytes=start-end"; either bound may be omitted.
    String rangeValue = request.getHeader("Range").trim().substring("bytes=".length());
    long start, end;
    if (rangeValue.startsWith("-")) {
        // Suffix form "bytes=-N": the last N bytes of the file.
        end = fileLength - 1;
        start = fileLength - Long.parseLong(rangeValue.substring("-".length()));
    } else {
        String[] range = rangeValue.split("-");
        start = Long.parseLong(range[0]);
        end = range.length > 1 ? Long.parseLong(range[1]) : fileLength - 1;
    }
    if (end > fileLength - 1) {
        end = fileLength - 1;
    }

    if (start <= end) {
        long contentLength = end - start + 1;
        response.setStatus(206); // Partial Content
        response.setHeader("Content-Length", String.valueOf(contentLength));
        response.setHeader("Content-Range", "bytes " + start + "-" + end + "/" + fileLength);
        response.setHeader("Content-Type", "video/mp4");
        response.setHeader("Accept-Ranges", "bytes");
        response.setHeader("ETag", "\"a226e70476837efa4df4b4bfd75366c4\""); // placeholder; derive from the file in real code
        response.setHeader("Server", "Apache");
        response.setDateHeader("Last-Modified", fileloc.lastModified());
        response.setDateHeader("Expires", System.currentTimeMillis() + 604800000L);
        // response.setHeader("Content-Disposition", "inline; filename=" + resfilename);

        RandomAccessFile raf = new RandomAccessFile(fileloc, "r");
        try {
            raf.seek(start);
            output = response.getOutputStream();
            byte[] buffer = new byte[8192];
            long totalRead = 0;
            // Copy exactly contentLength bytes; never write past the requested range.
            while (totalRead < contentLength) {
                int toRead = (int) Math.min(buffer.length, contentLength - totalRead);
                int bytesRead = raf.read(buffer, 0, toRead);
                if (bytesRead == -1) {
                    break;
                }
                totalRead += bytesRead;
                output.write(buffer, 0, bytesRead);
            }
        } finally {
            raf.close();
        }
    }
}
else
{
    // No Range header: serve the whole file with status 200 as usual.
}
In my case, I needed to remove the default attribute from the track tag:
<track default kind='captions' />
In case someone else has this problem.
Related
I'm working on a .NET Core MVC project.
As the title suggests, my goal is to store a cookie that will eventually be accessible through an iframe.
In order to achieve that, this is what I did -
Startup.cs -
app.UseCookiePolicy(new CookiePolicyOptions
{
MinimumSameSitePolicy = SameSiteMode.None
});
Using the actual CookieOptions class -
public void SetCookie(string key, string value, int? expireTime, HttpResponse Response)
{
CookieOptions option = new CookieOptions();
//allow cross-site cookies for iframes
option.SameSite = SameSiteMode.None;
if (expireTime.HasValue)
option.Expires = DateTime.Now.AddMinutes(expireTime.Value);
else
option.Expires = DateTime.Now.AddMilliseconds(10);
Response.Cookies.Append(key, value, option);
}
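Called like this, for example (hypothetical values):

// Store a cookie for 60 minutes with SameSite=None.
SetCookie("myCookie", "myValue", 60, Response);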
Doing the above, it seems like it doesn't always work as intended.
I've tested lots of browsers, both desktop and mobile.
I just found out that sometimes the cookie is stored successfully, like so -
Send for:
Any kind of connection
Accessible to script:
Yes
And sometimes, on the exact same Chrome version but a different computer, it's stored like so -
Send for:
Same-site connections only
Accessible to script:
Yes
Which basically means it won't be accessible using iframes.
The problem isn't a specific computer issue, as I managed to reproduce it on 3 different computers running the same Chrome version that works fine on other computers.
The example above was produced using this Chrome version (latest version):
Version 80.0.3987.149 (Official Build) (64-bit)
Does anyone have an idea how I can overcome this? I've got to make sure cookies will always be accessible using an iframe.
Thanks!
Edit - Attempt with Secure and HttpOnly flags
So I've adjusted my code to set the HttpOnly and Secure flags to true.
The computers that usually worked fine had these cookie settings -
Send for
Secure connections only
Accessible to script
No (HttpOnly)
And it works fine with iframe.
The computer which didn't work before had these cookie settings -
Send for
Secure same-site connections only
Accessible to script
No (HttpOnly)
Which obviously didn't work with an iframe...
Just updating on another approach that didn't work.
Edit 2 - Using Fiddler to intercept the cookie response:
So, using Fiddler to read the cookies, this is what they look like -
Set-Cookie: __cfduid={randomvaluehere}; expires=Fri, 24-Apr-20 17:37:48 GMT; path=/; domain=.domain.com; HttpOnly; SameSite=Lax
Set-Cookie: mycookie=mycookievalue; expires=Fri, 24 Apr 2020 17:37:49 GMT; path=/; secure; httponly
So it seems the response is storing a cookie with SameSite=Lax on the apex of the domain, which I don't care about.
I work on a sub-domain, which is covered by the second Set-Cookie shown above.
It looks like SameSite=None isn't explicitly present - should it be? If so, why isn't it, given the code above?
Also, a reminder that exactly this works fine on other browsers, or on other computers with the same Chrome version.
The sample above is exactly the same on a computer where it worked and on one where it didn't.
If I understand correctly, the flow for using ETags works like this:
The browser sends the request to the server. The server sends back the image with an ETag.
The browser saves the resource along with the ETag.
On the next request, the browser sends the request with the header If-None-Match containing the saved ETag.
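Concretely, I'd expect the revalidation round trip to look roughly like this (illustrative sketch, using the ETag from the response headers below):

GET /image.png HTTP/1.1
If-None-Match: "b36f59c868d4678033d318a182658e18371df8f5"

HTTP/1.1 304 Not Modified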
When returning a response, Chrome dev tools tells me these are my headers:
Cache-Control:max-age=7200
Connection:keep-alive
Content-Type:image/png
Date:Thu, 27 Apr 2017 13:42:57 GMT
ETag:"b36f59c868d4678033d318a182658e18371df8f5"
Expires:Thu, 27 Apr 2017 15:42:57 GMT
Server:nginx
Transfer-Encoding:chunked
X-Debug-Token:873c8f
X-Debug-Token-Link:http://localhost:8081/_profiler/873c8f
Now, when I reload the page, the new image isn't fetched, though; it's served from Chrome's in-memory cache or disk cache.
But why is this happening? I sent an ETag, so why does the browser not make another request to the server, but instead uses its own cache?
The reason I'm asking is that we want to cache our images, but as soon as they change, they should be updated immediately. Why does Chrome do that?
Update
I just noticed that it works as intended in Firefox, so this seems to be a Chrome "feature" and not a configuration issue.
Update 2
After setting my new headers for images like this:
Cache-Control:max-age=0, private
Connection:keep-alive
Content-Type:image/png
Date:Thu, 27 Apr 2017 14:44:57 GMT
ETag:"e5b18bdebe44ed4bba3acb6584d9e6a81692ee27"
Expires:Fri, 27 Oct 2017 14:44:57 GMT
Server:nginx
Transfer-Encoding:chunked
X-Debug-Token:3447a6
X-Debug-Token-Link:http://localhost:8081/_profiler/3447a6
Chrome still uses the disk cache to load the data. This is my nginx config now:
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
access_log off;
add_header Cache-Control "max-age=0, must-revalidate";
}
Update 3
I just did some further research. As soon as the Expires header is set, Chrome uses the in-memory or disk cache. Same with max-age. This I don't understand: even when must-revalidate is set, as soon as Expires or a max-age is set, Chrome doesn't reload the resource.
The server is telling Chrome that the resource is good for the next 2 hours (7200 seconds). Presumably your second request came sooner than that.
You would be better served with max-age=0, or perhaps max-age=0, must-revalidate. Then, while you'll never get a fully-cached operation (not even bothering to hit the server), you can still have the server send 304 Not Modified responses to tell the browser that it can use the cached entity (and update any metadata based on headers, if applicable). So while you still have a request/response happening, only around 300 bytes will be sent, rather than however many kilobytes or more the entity is.
For anyone else who might land here, note that Chrome does not cache anything if there are any SSL errors (such as if you're using a self-signed certificate).
Original post that clued me in: https://stackoverflow.com/a/55101722/9536265
Chrome bug: https://bugs.chromium.org/p/chromium/issues/detail?id=110649 (and it appears they will never fix it, which seems ridiculous since almost all developers will be developing in that very situation)
I have not been able to confirm with documentation, but the behavior appears to be the same with Edge Chromium. Firefox, on the other hand, will happily follow standard cache practices for sites using "Not secure" certificates such as those with imperfectly-matching site names or self-signed certificates. I have not tested Safari.
For ETags to be used and for the browser to send the If-Modified-Since and If-None-Match headers, the cache control must be set to no-cache.
This can be done either by the server, using the Cache-Control: no-cache header, or by the browser, through the Request.cache = 'no-cache' option.
Read more about the cache options here: https://developer.mozilla.org/en-US/docs/Web/API/Request/cache.
This is an old post, but this is how we solved it.
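As a sketch of the server side (a Java servlet here, with names and the hash value assumed for illustration; the PHP snippet further down does the same thing):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ImageServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String etag = "\"b36f59c868d4678033d318a182658e18371df8f5\""; // derive from the content in real code
        // no-cache forces the browser to revalidate, so the ETag is actually consulted.
        resp.setHeader("Cache-Control", "no-cache");
        resp.setHeader("ETag", etag);
        if (etag.equals(req.getHeader("If-None-Match"))) {
            resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED); // 304, empty body
            return;
        }
        resp.setContentType("image/png");
        // ... write the image bytes ...
    }
}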
#Musterknabe, commenting on your update 3:
The same thing happened to us: even after setting must-revalidate, Chrome was not reloading fresh resources. I found out that since the resources were already present in the client's browser cache, they were being served from the memory cache, and the new request to fetch the static resources never fired. As a result, the response headers were never updated with must-revalidate.
To fix this problem we used two steps:
1. Changed the resource file names - to make sure a new request would be fired.
2. Added cache-control headers for static files (in Startup.cs) - to take care of future static resource file changes, so that going forward we don't have to change the resource file names.
using Microsoft.AspNetCore.Builder;
using Microsoft.Net.Http.Headers;

public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles(new StaticFileOptions
    {
        OnPrepareResponse = ctx =>
        {
            // Serve static files with "must-revalidate,max-age=0" so the
            // browser revalidates them on every request from now on.
            const int durationInSeconds = 0;
            ctx.Context.Response.Headers[HeaderNames.CacheControl] =
                "must-revalidate,max-age=" + durationInSeconds;
        }
    });
}
Hope it helps.
After updating to Chrome 40.0.2214.111, intermittently, when I visit certain Google-related sites (like http://youtube.com, when presented with an ad before the video), the browser downloads a file named f.txt.
I do not have any adblock plugins installed.
f.txt contains a few lines of JavaScript...starting with:
if (!window.mraid) {document.write('\x3cdiv class="GoogleActiveViewClass" ' +'id="DfaVisibilityIdentifier_3851468350"\x3e');}document.write('\x3ca target\x3d\x22_blank\x22 href\x3d\x22https://adclick.g.doubleclick.net/pcs/click?xai\x3dAKAOjsvDhmmoi2r124JkMyiBGALWfUlTX-zFA1gEdFeZDgdS3JKiEDPl3iIYGtj9Tv2yTJtASqD6S-yqbuNQH5u6fXm4rThyCZ0plv9SXM-UPKJgH4KSS08c97Eim4i45ewgN9OoG3E_
In looking up the issue on Google, I found that others have experienced the same, but I have not found any resolution or explanation of why this is happening. I assume it is a content-disposition-related bug in some of the JS files loaded on the page, and that it will clear up in a future patch.
Wondering if anybody else has experienced this or has insight.
This issue appears to be causing ongoing consternation, so I will attempt to give a clearer answer than the previously posted answers, which only contain partial hints as to what's happening.
Some time around the summer of 2014, IT Security Engineer Michele Spagnuolo (apparently employed at Google Zurich) developed a proof-of-concept exploit and supporting tool called Rosetta Flash that demonstrated a way for hackers to run malicious Flash SWF files from a remote domain in a manner which tricks browsers into thinking it came from the same domain the user was currently browsing. This allows bypassing of the "same-origin policy" and can permit hackers a variety of exploits. You can read the details here: https://miki.it/blog/2014/7/8/abusing-jsonp-with-rosetta-flash/
Known affected browsers: Chrome, IE
Possibly unaffected browsers: Firefox
Adobe has released at least 5 different fixes over the past year while trying to comprehensively fix this vulnerability, but various major websites also introduced their own fixes earlier on in order to prevent mass vulnerability to their userbases. Among the sites to do so: Google, Youtube, Facebook, Github, and others. One component of the ad-hoc mitigation implemented by these website owners was to force the HTTP Header Content-Disposition: attachment; filename=f.txt on the returns from JSONP endpoints. This has the annoyance of causing the browser to automatically download a file called f.txt that you didn't request—but it is far better than your browser automatically running a possibly malicious Flash file.
In conclusion, the websites you were visiting when this file spontaneously downloaded are not bad or malicious, but some domain serving content on their pages (usually ads) had content with this exploit inside it. Note that this issue will be random and intermittent in nature because even visiting the same pages consecutively will often produce different ad content. For example, the advertisement domain ad.doubleclick.net probably serves out hundreds of thousands of different ads and only a small percentage likely contain malicious content. This is why various users online are confused thinking they fixed the issue or somehow affected it by uninstalling this program or running that scan, when in fact it is all unrelated. The f.txt download just means you were protected from a recent potential attack with this exploit and you should have no reason to believe you were compromised in any way.
The only way I'm aware that you could stop this f.txt file from being downloaded again in the future would be to block the most common domains that appear to be serving this exploit. I've put a short list below of some of the ones implicated in various posts. If you wanted to block these domains from touching your computer, you could add them to your firewall or alternatively you could use the HOSTS file technique described in the second section of this link: http://www.chromefans.org/chrome-tutorial/how-to-block-a-website-in-google-chrome.htm
Short list of domains you could block (by no means a comprehensive list). Most of these are highly associated with adware and malware:
ad.doubleclick.net
adclick.g.doubleclick.net
secure-us.imrworldwide.com
d.turn.com
ad.turn.com
secure.insightexpressai.com
core.insightexpressai.com
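For example, HOSTS-file entries along these lines would null-route the first two (the same pattern works for the rest):

127.0.0.1 ad.doubleclick.net
127.0.0.1 adclick.g.doubleclick.net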
I experienced the same issue, with the same version of Chrome (though that's unrelated to the issue). With the developer console I captured an instance of the request that spawned this, and it is an API call served by ad.doubleclick.net. Specifically, this resource returns a response with Content-Disposition: attachment; filename="f.txt".
The URL I happened to capture was https://ad.doubleclick.net/adj/N7412.226578.VEVO/B8463950.115078190;sz=300x60...
Per curl:
$ curl -I 'https://ad.doubleclick.net/adj/N7412.226578.VEVO/B8463950.115078190;sz=300x60;click=https://2975c.v.fwmrm.net/ad/l/1?s=b035&n=10613%3B40185%3B375600%3B383270&t=1424475157058697012&f=&r=40185&adid=9201685&reid=3674011&arid=0&auid=&cn=defaultClick&et=c&_cc=&tpos=&sr=0&cr=;ord=435266097?'
HTTP/1.1 200 OK
P3P: policyref="https://googleads.g.doubleclick.net/pagead/gcn_p3p_.xml", CP="CURa ADMa DEVa TAIo PSAo PSDo OUR IND UNI PUR INT DEM STA PRE COM NAV OTC NOI DSP COR"
Date: Fri, 20 Feb 2015 23:35:38 GMT
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Cache-Control: no-cache, must-revalidate
Content-Type: text/javascript; charset=ISO-8859-1
X-Content-Type-Options: nosniff
Content-Disposition: attachment; filename="f.txt"
Server: cafe
X-XSS-Protection: 1; mode=block
Set-Cookie: test_cookie=CheckForPermission; expires=Fri, 20-Feb-2015 23:50:38 GMT; path=/; domain=.doubleclick.net
Alternate-Protocol: 443:quic,p=0.08
Transfer-Encoding: chunked
Accept-Ranges: none
Vary: Accept-Encoding
FYI, after reading this thread, I took a look at my installed programs and found that somehow, shortly after upgrading to Windows 10 (possibly/probably? unrelated), an ASK search app had been installed, as well as a Chrome extension (Windows was kind enough to remind me to check that). Since removing them, I have not had the f.txt issue.
This can occur on Android too, not just computers. I was browsing with Kiwi when the site I was on began to redirect endlessly, so I cut net access to close it out and noticed my phone had downloaded a file named f.txt.
I deleted it and didn't open it.
Seems related to https://groups.google.com/forum/#!msg/google-caja-discuss/ite6K5c8mqs/Ayqw72XJ9G8J.
The so-called "Rosetta Flash" vulnerability is that allowing arbitrary
yet identifier-like text at the beginning of a JSONP response is
sufficient for it to be interpreted as a Flash file executing in that
origin. See for more information:
http://miki.it/blog/2014/7/8/abusing-jsonp-with-rosetta-flash/
JSONP responses from the proxy servlet now:
* are prefixed with "/**/", which still allows them to execute as JSONP
but removes requester control over the first bytes of the response.
* have the response header Content-Disposition: attachment.
A simple HTML snippet:
<img src="http://someaddr/image.php">
image.php is a script that returns a random redirect to a static image, with all the necessary no-cache headers:
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Location: http://someaddr/random_image_12345.jpg
The problem: when navigating back and forward to this HTML page, Chrome (latest, Win/Mac) does not revalidate the address http://someaddr/image.php.
I have tried using 302 and also 303 redirects (the latter of which, per the RFC, has the stronger requirement that it should NEVER be cached by the browser). This works like a charm in IE, Firefox, and Opera: they always refresh http://someaddr/image.php. But Chrome doesn't.
I have even used Chrome's Developer Tools, and the Network log doesn't show any attempt (cached or not) to fetch http://someaddr/image.php; it shows only one connection, straight to http://someaddr/random_image_12345.jpg (cached). Why is this so broken...
I know the naive/simple solution of putting a query string in the image source:
<img src="http://someaddr/image.php?refresh={any random number or timestamp}">
But I don't like/can't use hacks like that. Are there ANY other options?
Try a 307 redirect. Per the HTTP spec, a 307 Temporary Redirect is not cacheable unless the response headers explicitly allow it.
But if you're stuck trying to get to a link that won't work due to a cached redirect...
This doesn't clear the cache, but it's one quick and possible route around it if you're pulling your hair out trying to reach a link whose redirect has been cached.
Copy the link address into the address bar and add some GET information to the address.
EXAMPLE
If your site is http://example.com
Put a ?x=y at the end of it
( example.com?x=y ) - the x and y could be anything you want.
If there is already a ? in the url with some info after it
( example.com?this=that&true=t ) - try to add &x=y to the end of it...
( example.com?this=that&true=t&x=y )
From a link posted in another question:
The first header, Cache-Control: must-revalidate, means that the browser must send a validation request every time, even if a cache entry already exists for this object.
The browser receives the content and stores it in the cache along with the last-modified value.
Next time, the browser will send an additional header:
If-Modified-Since: 15 Sep 2008 17:43:00 GMT
This header means that the browser has a cache entry that was last changed at 17:43.
The server will then compare that time with the last-modified time of the actual content; if it has changed, the server will send the whole updated object along with a new Last-Modified value.
If there have been no changes since the previous request, there will be a short, empty-body answer:
HTTP/1.x 304 Not Modified
You can use HTTP's ETags and last-modified dates to ensure that you're not sending the browser data it already has cached.
$last_modified_time = filemtime($file);
$etag = md5_file($file);
header("Last-Modified: ".gmdate("D, d M Y H:i:s", $last_modified_time)." GMT");
header("Etag: $etag");
// Answer 304 if the client's cached copy is still current.
if (@strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) == $last_modified_time ||
    trim($_SERVER['HTTP_IF_NONE_MATCH']) == $etag) {
    header("HTTP/1.1 304 Not Modified");
    exit;
}
I am making a Flash app that calls the Google Translate text-to-speech service through the URL:
translate.google.com/translate_tts?tl=en&q=example
I got it to work in Firefox, but for some reason it does NOT work in Chrome and Safari. Where could the problem be?
The error I get is:
[IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032: Stream Error. URL: http://translate.google.com/translate_tts?tl=en&q=example"]
but when I copy/paste the URL into the browser, it returns a file just like it should.
Flash players:
firefox: 10,0,42,34 installed - WORKS
chrome: 11,1,102,55 installed - DOES NOT WORK
safari: 10,0,42,34 installed - DOES NOT WORK
I am completely stunned and don't know how to debug further.
Please help.
UPDATE 1: FLASH CODE
public function say(text:String, language:String):void {
var urlString:String = createGoogleTTSUrl(text, language);
var url:URLRequest = new URLRequest(urlString);
//var context:SoundLoaderContext = new SoundLoaderContext(1000, true);
_sound = new Sound();
_sound.addEventListener(Event.COMPLETE, loadComplete);
_sound.addEventListener(ErrorEvent.ERROR, err);
_sound.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler2);
_soundChannel = new SoundChannel();
_sound.load(url); //, context);
}
private function ioErrorHandler2(event:IOErrorEvent):void {
trace(event);
}
I only later removed the SoundLoaderContext, but that didn't change anything.
UPDATE 2: Other people with the same problem:
This tutorial has the same issue. It works in FF, but not in Chrome or Safari; people in the comments report similar errors.
(click the demo button:)
http://active.tutsplus.com/freebies/exclusive/exclusive-freebie-text-to-speech-utility/
The obvious reason for the #2032 error, when sniffing the actual request and response, is that Google responds with a 404 when called from Flash in Chrome or IE (I haven't tested Safari or Opera). But why does it return a 404?
Not a solution, but some troubleshooting: what does Firefox do differently from the others in terms of the request? In the following, ChD = "Chrome directly calling the API with no Flash (which works)".
Accept
FF: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Ch: Accept: */*
IE: Accept: */*
ChD: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
This might be it, but it seems unlikely. The two that work send more than just a wildcard Accept header.
User-Agent
Obviously each browser sends a different User-Agent. Except that ChD sends the same one as Ch, and ChD works while Ch doesn't, so that isn't it.
Referer
Firefox sends no Referer along. The others send:
Referer: http://activetuts.s3.amazonaws.com/freebies/006_textToSpeech/tutorial/text2speech.swf
ChD obviously sends no Referer either, since I typed the address in manually. So the Referer header might be the problem.
Considering that TTS isn't a public API but a private endpoint (for Google's own Translate service), i.e. an endpoint which you're really not allowed to use, that wouldn't be surprising.
Other
Other than that, and some language-acceptance details (plus cookie contents: the same set of cookies is sent, and on my machine it's actually their own cookies for once; Flash used to have a problem where it sent IE's cookies in the Firefox plugin), the requests are identical, but only Firefox's doesn't result in a 404 on my machine.
FlashPlayer versions
FF: 11.0.1.152 Debug
Ch: 11.1.102.55
IE: 11.0.1.152 Debug
Update: IE also sends the Flash version along: x-flash-version: 11,0,1,152. But none of the other browsers do, so that's not why they don't work.