Cannot play WAV file on Safari - HTML

I'm trying to play a WAV file on Safari. This is pretty much the same question as this one: playing a WAV file on iOS Safari
But the accepted answers aren't working. I'm using a Rails server with Apache and Phusion Passenger. The audio file plays fine in Chrome but not in any Safari (desktop, mobile, or through UIWebView).
I'm sending the file in Rails with
send_file filename, :type => "audio/x-wav", :disposition => "inline"
Following the other Stack Overflow Q&A, I tried adding Content-Range and Content-Length headers to the response:
size = File.size(filename)
response.header["Content-Range"] = "bytes 0-#{size-1}/#{size}"
response.header["Content-Length"] = "#{size}"
The error I'm receiving is pretty nondescript: "Failed to load resource: Plug-in handled load".
Here are the headers of the response:
x-runtime 0.589797
Date Wed, 17 Aug 2016 16:38:49 GMT
X-Content-Type-Options nosniff
Server Apache/2.2.15 (CentOS)
X-Powered-By Phusion Passenger 5.0.7
Status 200 OK
Content-Type audio/x-wav
content-range bytes 0-3243/3244
Cache-Control private
Content-Transfer-Encoding binary
Content-Disposition inline; filename="eng-182-msg0026.wav"
Connection close
Content-Length 3244
X-XSS-Protection 1; mode=block

When requesting a direct link to a media file, this would work, but I was requesting it through a controller. Safari makes multiple HTTP requests, each for a specific byte range, and the server was responding with the entire file every time, which is why it fails.
The annoying thing is that the Safari Web Inspector decides not to show the "Range" header on the network request. It'll show the other HTTP headers, but not the most important one in this case...
Anyway, short answer: respond with the correct byte range that each request asks for (rough sketch below).
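For reference, here's a minimal sketch of that logic as a bare Python WSGI handler rather than the actual Rails controller; the file path is a placeholder and only single "bytes=start-end" ranges are handled:
import os, re

AUDIO_PATH = "eng-182-msg0026.wav"  # placeholder path

def app(environ, start_response):
    size = os.path.getsize(AUDIO_PATH)
    range_header = environ.get("HTTP_RANGE")              # e.g. "bytes=0-1" or "bytes=100-"
    match = re.match(r"bytes=(\d+)-(\d*)", range_header or "")
    if match:
        start = int(match.group(1))
        end = int(match.group(2)) if match.group(2) else size - 1
        with open(AUDIO_PATH, "rb") as f:
            f.seek(start)
            body = f.read(end - start + 1)
        headers = [("Content-Type", "audio/x-wav"),
                   ("Accept-Ranges", "bytes"),
                   ("Content-Range", "bytes %d-%d/%d" % (start, end, size)),
                   ("Content-Length", str(len(body)))]
        start_response("206 Partial Content", headers)     # partial content, not 200
    else:
        with open(AUDIO_PATH, "rb") as f:
            body = f.read()
        headers = [("Content-Type", "audio/x-wav"),
                   ("Accept-Ranges", "bytes"),
                   ("Content-Length", str(size))]
        start_response("200 OK", headers)
    return [body]
Safari typically probes with a tiny range first (something like bytes=0-1) and then asks for more; the important part is that every response carries 206 and a Content-Range that matches exactly the bytes being returned.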

I had the same problem: playing audio files didn't work in Safari, but in Chrome and every other browser everything worked as expected.
Setting a proper Content-Range in the response header solved my problem. I used the forked gem send_file_with_range, which contains a fix for Rails 5.1.
I recommend this gem because its code is short and simple, and this seems like the simplest and quickest solution.
Note: this solution is specific to Ruby on Rails developers, but the general fix is to set a proper Content-Range in the response header for every request.
I hope that this will be helpful for somebody.

Related

PayaraMicro doesn't support the HTTP Range header?

My webapp running on Payara-Micro is a tool for listening to audio files and navigating freely through them using the JavaScript currentTime property.
So the browser has <audio src="..."> tags, and to get the audio file it sends an HTTP GET request to the server with the header Range: bytes=0-
Unfortunately, Payara's response doesn't return a 206 status with Content-Range: bytes 0-881403; it returns 200, and the effect is that when I set currentTime=10, for example, currentTime becomes 0!
Previously this app ran in PHP behind an Apache server, and Apache supported the Range header.
Is it possible to configure PayaraMicro or Grizzly to support range requests? Or if I put an Apache server in front of PayaraMicro, will it work?
Thank you for your help!
Payara supports RFC 7233. I made a mistake: there was a filter in my code that replaced the default response headers for media files.
Here is the offending filter:
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse,
                     FilterChain filterChain) throws IOException, ServletException {
    HttpServletResponse httpServletResponse = (HttpServletResponse) servletResponse;
    // (...)
    // Forcing text/html onto every response, including the audio files,
    // is what broke the range handling:
    httpServletResponse.setHeader("Content-Type", "text/html; charset=UTF-8");
    filterChain.doFilter(servletRequest, servletResponse);
}
Subject closed!

Chrome is not sending If-None-Match

I'm making requests to my REST API. I have no problems with Firefox, but in Chrome I can't get caching to work: the server always returns 200 OK, because no If-None-Match (or similar) header is sent with the request.
With Firefox I get 304 perfectly.
I think I'm missing something. I tried Cache-Control: max-age=10 as a test, but nothing changed.
One reason Chrome may not send If-None-Match is when the response includes an "HTTP/1.0" instead of an "HTTP/1.1" status line. Some servers, such as Django's development server, send the older status line (probably because they do not support keep-alive), and when they do so, ETags don't work in Chrome.
In the "Response Headers" section of the Network panel, click "view source" instead of the parsed version. The first line will probably read something like HTTP/1.1 200 OK; if it says HTTP/1.0 200 OK, Chrome seems to ignore any ETag header and won't use it on the next load of this resource.
There may be other reasons too (e.g. make sure your ETag header value is sent inside quotes), but in my case I eliminated all other variables and this is the one that mattered.
UPDATE: looking at your screenshots, it seems this is exactly the case (HTTP/1.0 server from Python) for you too!
Assuming you are using Django, put the following hack in your local settings file, otherwise you'll have to add an actual HTTP/1.1 proxy in between you and the ./manage.py runserver daemon. This workaround monkey patches the key WSGI class used internally by Django to make it send a more useful status line:
# HACK: without HTTP/1.1, Chrome ignores certain cache headers during development!
# see https://stackoverflow.com/a/28033770/179583 for a bit more discussion.
from wsgiref import simple_server
simple_server.ServerHandler.http_version = "1.1"
Also check that caching is not disabled in the browser, as is often done when developing a web site so you always see the latest content.
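If you want to see the behaviour in isolation, here is a small self-contained test server sketch (my own throwaway harness, not part of the question's API) that serves one resource over HTTP/1.1 with a quoted ETag and answers a matching If-None-Match with 304:
# Tiny test server: HTTP/1.1 status line + quoted ETag, so Chrome sends
# If-None-Match on reloads and gets 304 back. All values are made up.
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"hello"
ETAG = '"v1"'   # note the quotes around the ETag value

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # without this the status line says HTTP/1.0

    def do_GET(self):
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)                  # not modified, no body
            self.send_header("ETag", ETAG)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("ETag", ETAG)
        self.send_header("Cache-Control", "no-cache")  # always revalidate
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
Load http://127.0.0.1:8000/ twice with DevTools open: the second request should carry If-None-Match: "v1" and come back as 304.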
I had a similar problem in Chrome: I was using http://localhost:9000 for development (which didn't send If-None-Match).
By switching to http://127.0.0.1:9000, Chrome [1] automatically started sending the If-None-Match header in requests again.
Additionally, ensure DevTools > Network > Disable cache is unchecked.
[1] I can't find this documented anywhere; I'm assuming Chrome was responsible for this logic.
Chrome is not sending the appropriate headers (If-Modified-Since and If-None-Match) because cache control is not set, so the default applies (which is what you're experiencing). Read more about the cache options here: https://developer.mozilla.org/en-US/docs/Web/API/Request/cache.
You can get the desired behaviour on the server by setting the Cache-Control: no-cache header, or on the browser/client through the Request.cache = 'no-cache' option.
Chrome was not sending the If-None-Match header for me either, and I didn't have any cache-control headers. I closed the browser, opened it again, and it started sending the If-None-Match header as expected. So restarting your browser is one more thing to try if you have this kind of problem.

Why does my browser sometimes not recognize my server's header?

If my browser tries to do an HTTP GET on a file that isn't on my server, I just write
HTTP/1.0 404 Not Found\r\n\r\n
to the port. In my browser's web console, I can see that sometimes the error is recognized, but sometimes it's not. Instead the text will be displayed in the browser window. Why does this happen?
If I use curl the response always looks the same, so why does it do this in the browser?
It seems that most web browsers expect a Content-Length: header in all responses, even if you are not sending any body; which is pretty funny, because the only mandatory header in the spec is Host:, only in requests, and only for HTTP/1.1.
I tested with Firefox and I had to write down the following to make it work:
HTTP/1.0 404 Not Found\r\n
Content-Length: 0\r\n\r\n
The browser should then close the connection. If you use libcurl, it probably inserts this and other headers, such as Date:, for you.
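As a quick way to reproduce this, here's a rough sketch of a throwaway socket server (not the asker's actual code; the port is arbitrary) that answers every request with exactly that response:
# Answer every request with a bare 404. Per the answer above, the
# Content-Length: 0 line is what makes Firefox treat this as a proper 404
# rather than displaying the raw text.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8080))
srv.listen(1)

while True:
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the request
    conn.sendall(b"HTTP/1.0 404 Not Found\r\nContent-Length: 0\r\n\r\n")
    conn.close()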

Issue with downloading PDF from S3 on Chrome

I'm facing an issue on downloading PDF files from Amazon S3 using Chrome.
When I click a link, my controller redirects the request to the file's URL on S3.
It works perfectly with Firefox, but nothing happens with Chrome.
Yet performing a right click -> Save location as... will download the file...
And even copy-pasting the S3 URL into Chrome leads to a blank screen...
Here is some information returned by curl:
Date: Wed, 01 Feb 2012 15:34:09 GMT
Last-Modified: Wed, 01 Feb 2012 04:45:24 GMT
Accept-Ranges: bytes
Content-Type: application/x-pdf
Content-Length: 50024
Server: AmazonS3
My guess is that this is related to an issue with the content type, but nothing I tried worked.
The canonical Internet media type for a PDF document is actually application/pdf as defined in The application/pdf Media Type (RFC 3778) - please note that application/x-pdf, while commonly encountered and listed as a media type in Portable Document Format as well, is notably absent from the official Application Media Types listed by the Internet Assigned Numbers Authority (IANA).
I'm not aware of why and when application/x-pdf came to life, but apparently the Chrome PDF Plugin does not open application/x-pdf documents as of today.
Consequently you should be able to trigger a different behavior in Chrome by changing the media type of the stored objects accordingly.
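If the files are already in the bucket, the stored media type can be rewritten in place by copying each object onto itself with replaced metadata; a rough sketch with today's boto3 (bucket and key names are placeholders, and the same change can be made from the S3 console or CLI):
# Rewrite the stored Content-Type of an existing object by copying it onto
# itself with replaced metadata (placeholder bucket/key names).
import boto3

s3 = boto3.client("s3")
s3.copy_object(
    Bucket="my-bucket",
    Key="docs/report.pdf",
    CopySource={"Bucket": "my-bucket", "Key": "docs/report.pdf"},
    ContentType="application/pdf",    # instead of application/x-pdf
    MetadataDirective="REPLACE",      # required for the new Content-Type to stick
)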
Alternative (for authenticated requests)
Another approach would be to force the PDF to download instead of letting Chrome attempt to open it, which can be done by triggering a Content-Disposition: attachment header with your GET request - please see the S3 documentation for GET Object on how to achieve this via the response-content-disposition request parameter, specifically response-content-disposition=attachment as demonstrated there in the section Sample Request with Parameters Altering Response Header Values.
This is only available for authenticated requests though, see section Request Parameters:
Note
You must sign the request, either using an Authorization header
or a Pre-signed URL, when using these parameters. They can not be used
with an unsigned (anonymous) request.
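For the authenticated route, the override travels with a signed URL; a hedged boto3 sketch (placeholder bucket/key, one-hour expiry):
# Generate a signed GET URL that asks S3 to send the object back with
# Content-Disposition: attachment (placeholder bucket/key).
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": "my-bucket",
        "Key": "docs/report.pdf",
        "ResponseContentDisposition": 'attachment; filename="report.pdf"',
    },
    ExpiresIn=3600,
)
print(url)  # hand this URL to the browser; S3 adds the header to its response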
There is an HTML-based solution to this. Since Chrome is up to date with HTML5, we can use the shiny new download attribute!
Broken: <a href="...">Download PDF</a>
Works: <a href="..." download>Download PDF</a>

How to get around the "Content-encoding gzip deflate" header sent by Chrome?

We have a simple HTML login form on our embedded device's web server. The web server is custom coded because of severe memory limitations. Regardless of these limitations, we like Chrome and would like to support it.
All browsers post an HTTP Request to our login form containing the expected "username=myname&password=mypass" string, but not Chrome. Instead we receive from Chrome a "Content-encoding gzip deflate" request. BTW, by "all browsers", I mean this was tested to work fine on Internet Explorer versions 9 beta, 8, 7, 6 ; Firefox versions 4 beta, 3, 2 ; Opera 10, 9 ; Safari 5, 4, 3 ; and SeaMonkey 2.
Referring to section "14.2 Accept Charset" of w3.org's http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html, we tried sending back an HTTP 406 code to indicate that this server does not support that encoding, in the hope that Chrome would try again and post the expected strings the standard way. The 406 code returned by the web server is clearly displayed in Chrome's "Inspect Element" window, but it seems to be treated by Chrome as an error code, and no further requests are sent to the web server. "Login failed." We also tried HTTP return codes 405 and 200, with the same result.
Is there a way to get around this behavior either with client-side JavaScript that will prevent Chrome from sending the "Content-encoding gzip deflate" request, or with a server-side response that will explain nicely to Chrome we don't do gzip, just send it to us the regular way?
We tried posting to the Google Chrome Troubleshooting forum with no response.
Any help would be greatly appreciated!
Best regards,
Bert
You're looking in the wrong section for the error code: Section 14.11 of RFC 2616 specifies that you send a 415 (Unsupported Media Type) if you can't deal with the Content-Encoding.
It sounds like when Chrome POSTs to a server for the first time, it defaults to using gzip encoding. Pretty strange.
The easy way out is to place your username/password as GET parameters; as long as your response doesn't use gzip content encoding, Chrome should start sending non-gzipped POSTs from that point on. Hope that works?
I tested this out a bit with a simple Python script that printed to stdout. I thought I was getting the same problem, but then I realized that I was just forgetting to flush stdout. It seems that Chrome always sends the request up to the end of the headers before sending the request content, and you have to use a second recv call to get the POST data. In contrast, the entire Firefox request is returned in a single recv call.
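To make that concrete, here is a rough sketch of the kind of read loop that copes with it (my own illustration, not the embedded server's code): read until the blank line that ends the headers, then keep reading until Content-Length bytes of body have arrived.
# Accept one POST and read it in two phases, since Chrome may deliver the
# headers and the body in separate packets (the port is arbitrary).
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8080))
srv.listen(1)
conn, _ = srv.accept()

data = b""
while b"\r\n\r\n" not in data:          # phase 1: the request line and headers
    data += conn.recv(4096)
headers, _, body = data.partition(b"\r\n\r\n")

length = 0
for line in headers.split(b"\r\n"):
    if line.lower().startswith(b"content-length:"):
        length = int(line.split(b":", 1)[1].strip())

while len(body) < length:               # phase 2: the POST body (often a second recv)
    body += conn.recv(4096)

print(body.decode())                    # e.g. username=myname&password=mypass
conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
conn.close()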