Audio problems in Chrome

I'm trying to develop an HTML5 game. To this end I run the PHP built-in server on my local machine for development, then I "compile" everything (combine/minify the JS files, minify the CSS, etc.) and upload the result to the production server.
I've run into an issue with the sound effects. For some reason, Chrome won't recognise that it has reached the end of the sound, and therefore gets stuck in a "buffering" state, in spite of a valid Content-Length header. IE and Firefox don't seem to have this problem. The loop attribute is also broken, and ended events never fire.
These problems are only present on the PHP built-in server. It works fine in production. What am I missing?
EDIT: Here's an example request/response on the PHP server:
GET /bin/aud/rightwhereiwantyou.mp3 HTTP/1.1
Host: 10.0.0.110:8000
Connection: keep-alive
Accept-Encoding: identity;q=1, *;q=0
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31
Accept: */*
Referer: http://10.0.0.110:8000/snake.php
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Range: bytes=0-

HTTP/1.1 200 OK
Host: 10.0.0.110:8000
Connection: close
X-Powered-By: PHP/5.4.13
Content-Type: audio/mpeg
Content-Length: 1685058

My wild guess is that Chrome wants a partial-content response (a byte range): it sends Range: bytes=0- because the MP3 file could be large and it would rather stream it, yet the response above is a plain 200 with no Accept-Ranges.
Have you tried adding the Accept-Ranges and Content-Range headers to your response? See the answer to this question for an example.
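If it really is the missing range support, one workaround is to start the built-in server with a router script that answers Range requests itself. Below is a minimal, unhardened sketch: the /bin/aud-style path and audio/mpeg type are taken from the exchange above, everything else (file layout, regexes) is my assumption.

<?php
// router.php -- sketch only: serves MP3s with byte-range support and
// delegates every other request to the built-in server.
// NOTE: no path-traversal protection; do not use beyond local dev.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$file = __DIR__ . $path;

if (preg_match('/\.mp3$/', $path) && is_file($file)) {
    $size  = filesize($file);
    $start = 0;
    $end   = $size - 1;

    // Honour a simple "Range: bytes=start-[end]" header if present.
    if (isset($_SERVER['HTTP_RANGE']) &&
        preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
        $start = (int) $m[1];
        if ($m[2] !== '') {
            $end = (int) $m[2];
        }
        header("HTTP/1.1 206 Partial Content");
        header("Content-Range: bytes $start-$end/$size");
    }

    header('Content-Type: audio/mpeg');
    header('Accept-Ranges: bytes');
    header('Content-Length: ' . ($end - $start + 1));

    $fp = fopen($file, 'rb');
    fseek($fp, $start);
    echo fread($fp, $end - $start + 1);
    fclose($fp);
    return true;  // request handled here
}

return false;     // fall through to the built-in server's own handling

Start the server with php -S 0.0.0.0:8000 router.php and Chrome should get the Accept-Ranges/Content-Range answers it is looking for.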

This appears to be a known bug with Google Chrome.

Related

Why does Chrome 58.0.3029.81 (64 Bit) cause ViewExpiredException on log in?

On Thursday (2017-04-26), I began seeing the following error when I logged into my application using my Authenticator JSF page.
[#|2017-04-30T15:18:51.649-0500|WARNING|glassfish 4.1|javax.enterprise.web|_ThreadID=30;_ThreadName=http-listener-1(2);_TimeMillis=1493583531649;_LevelValue=
StandardWrapperValve[Faces Servlet]: Servlet.service() for servlet Faces Servlet threw exception
javax.faces.application.ViewExpiredException: viewId:/security/Authenticator.xhtml - View /security/Authenticator.xhtml could not be restored.
    at com.sun.faces.lifecycle.RestoreViewPhase.execute(RestoreViewPhase.java:212)
    at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101)
    at com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:123)
    at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:198)
My Authenticator.xhtml page is backed by an Authenticator.java class with the following header.
@Named
@ViewScoped
public class Authenticator implements Serializable {
During my research, I discovered the following:
I am able to log into my application using Chrome 58.0.3029.81 one time after restarting the computer running the GlassFish 4.1.2 server. If I log off, I will get the above error on every future log-in attempt. (This is a weird one.)
I can log in using Internet Explorer.
I can log in using Chrome versions older than 58.0.3029.81.
I can log in using Chrome 57.0.2987.132 on my Android telephone.
I can log in using Chrome 58.0.3029.81 if I change the javax.faces.STATE_SAVING_METHOD variable in my web.xml file from server to client (shown below).
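For reference, that last workaround is the standard JSF context parameter in web.xml:

<!-- switch JSF view-state saving from the server to the client -->
<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>client</param-value>
</context-param>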
Why would Chrome 58.0.3029.81 kill the Authenticator view resulting in the ViewExpiredException?
As requested, I analyzed the network traffic and determined that Chrome 58.0.3029.81 sends two more GET requests during the Authenticator.xhtml display process than Chrome 57.0.2987.133 does.
Chrome 57:
GET /webapp/security/Authenticator.xhtml HTTP/1.1
GET /webapp/security/RES_NOT_FOUND HTTP/1.1
GET /webapp/security/RES_NOT_FOUND HTTP/1.1
POST /webapp/security/Authenticator.xhtml HTTP/1.1
Chrome 58:
GET /webapp/security/Authenticator.xhtml HTTP/1.1
GET /webapp/security/RES_NOT_FOUND HTTP/1.1
GET /webapp/security/RES_NOT_FOUND HTTP/1.1
GET /webapp/security/RES_NOT_FOUND HTTP/1.1
GET /webapp/security/RES_NOT_FOUND HTTP/1.1
POST /webapp/security/Authenticator.xhtml HTTP/1.1
Since I don't know why Chrome sends the RES_NOT_FOUND GETs in the first place, I can't tell whether sending two extra ones is a bad thing, but it seems related to GlassFish 4.1.2 being unable to restore the Authenticator view.
Could this be an issue with my Authenticator.xhtml page or is it a Chrome 58/GlassFish 4.1.2 issue?
The following is a comparison of the POST information:
Chrome 57 Post
POST /webapp/security/Authenticator.xhtml HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Content-Length: 205
Cache-Control: max-age=0
Origin: http://localhost:8081
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: http://localhost:8081/webapp/security/Authenticator.xhtml
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8
Cookie: JSESSIONID=4067aa3d0df7f2bc26b8200a8c4a; modena_expandeditems=j_idt32%3Awelcome-menu

authentication-form=authentication-form&authentication-form%3AuserName=XXX&authentication-form%3Apassword=XXX&authentication-form%3Aj_idt93=&javax.faces.ViewState=-4577625721740212982%3A4298605796688550126
Chrome 58 Post
POST /webapp/security/Authenticator.xhtml HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Content-Length: 204
Cache-Control: max-age=0
Origin: http://172.24.1.125:8081
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: http://172.24.1.125:8081/webapp/security/Authenticator.xhtml
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
Cookie: JSESSIONID=4089ef02f0bca32d331de1f5404f

authentication-form=authentication-form&authentication-form%3AuserName=XXX&authentication-form%3Apassword=XXX&authentication-form%3Aj_idt93=&javax.faces.ViewState=3383766421781608154%3A6418504070036764787
The only difference that I see is that Chrome 57 appended "; modena_expandeditems=j_idt32%3Awelcome-menu" after the JSESSIONID.
This turned out to be an issue with version 2.1.1 of the PrimeFaces premium theme called Modena and PrimeFaces 6. During HTTP analysis, I noticed that Chrome 57 sent 2 RES_NOT_FOUND requests and Chrome 58 sent 4 RES_NOT_FOUND requests. This was a known issue with Modena 2.1.1 as documented in the following PrimeFaces Modena Forum issue:
PrimeFaces Modena Forum Issue
During each RES_NOT_FOUND request the JSESSIONID would change, and something about the two additional changes under Chrome 58 would break the link between the JSESSIONID and the ViewState.
Upgrading Modena to version 2.1.3 eliminated all the RES_NOT_FOUND requests and resolved the ViewExpired issue.

Chrome - Confirm Form Resubmission - Different behaviors

In a commercial web application that uses POST requests frequently, it was necessary to perform history.go(-1) to trigger back navigation. As many others have experienced, I am getting a
Confirm Form Resubmission ERR_CACHE_MISS
error in Chrome. However, the same works just fine in Firefox (this is not a duplicate; please read on).
It is true that using POST requests to render content, without the Post/Redirect/Get design pattern, is what leads to the above problem (a sketch of the pattern follows below).
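For context, Post/Redirect/Get means the POST handler never renders content itself: it processes the form, then redirects to a plain GET view, so the history entry the back button returns to is a GET. A minimal sketch in PHP (the file names and process_form are illustrative; a JSP application would do the equivalent with response.sendRedirect):

<?php
// post-handler.php -- Post/Redirect/Get sketch
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    process_form($_POST);                        // hypothetical processing step
    header('Location: /result.php', true, 303);  // 303 See Other -> browser re-requests with GET
    exit;
}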
However, while looking for an alternative solution, I observed that on certain web sites/apps it is possible to navigate back in Chrome (a cache hit), whereas on others it fails. I have inspected all the HTTP headers from the successful sites, and the headers do not seem to make any difference.
Could it be the browser acting differently based on the SSL certificate used by the web application, or what else could explain back navigation working on certain web sites?
Example working web applications :
http://gmail.com/ - Enter some random email. Click next. Enter incorrect password multiple times and use browser back button to navigate back.
https://support.hostgator.com/ - Enter some random text in search box (do this several times). Use browser back button to navigate back.
POST request used in the failing web application:
POST /post3.jsp HTTP/1.1
Host: 192.168.1.111
Connection: keep-alive
Content-Length: 18
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Origin: https://192.168.1.111
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.99 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Referer: https://192.168.1.111/post2.jsp
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
I identified that in Chrome, back navigation fails in the above workflow unless the SSL certificate used for the HTTPS communication is a valid, trusted certificate.
If you are using a self-signed certificate, add the CA certificate to your Trusted Root Certification Authorities store and everything should work as expected.
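On Windows, one way to do that (assuming the CA certificate has been exported to a file called ca.crt, a name I'm choosing for illustration) is certutil from an elevated prompt:

certutil -addstore Root ca.crt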

What does a double forward slash mean in a URL, aside from the protocol separator at the beginning?

I have seen a lot of URLs lately that seem to have hijacked a portion of a site ('www.example.com'). In the example URL "http://www.example.com//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html", 'wp-includes' is WordPress and 'js' is JavaScript. What has (typically) been done to these sites, and what is to be done (aside from notifying example.com or their host..)?
Thank you.
Apart from the domain, the rest is the "path".
The link you entered is translated by the browser into an HTTP request with headers like these (example):
GET //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html HTTP/1.1
Host: www.example.com
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2024.2 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: el-GR,el;q=0.8
Check this line:
GET //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html HTTP/1.1
It's the server's job to reply to the request, so if the server knows what
//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html
means, it's fine.
Most of the time this is translated to a path in a folder, but probably not in this case.
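To see that the doubled slash is nothing special at the URL level, note that PHP's parse_url() keeps it verbatim in the path component (a small illustration, not from the original posts):

<?php
$url = "http://www.example.com//wp-includes/js/apparently_user_defined_dir/from_here_on/index.html";
echo parse_url($url, PHP_URL_PATH);
// prints: //wp-includes/js/apparently_user_defined_dir/from_here_on/index.html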
The page you entered returns a Status Code: 404 Not Found, so your requested page was not found, and the server responds with this error page ... which for some reason reports no error to the user. (We all know this is an example page.)

Why does the browser still send requests for Cache-Control: public with max-age?

I have Amazon S3 objects, and for each object, I have set
Cache-Control: public, max-age=3600000
That is roughly 41 days.
And I have an Amazon CloudFront distribution with the Minimum TTL also set to 3600000.
This is the first request after clearing the cache.
GET /1.0.8/web-atoms.js HTTP/1.1
Host: d3bhjcyci8s9i2.cloudfront.net
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.57 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
And the response is:
HTTP/1.1 200 OK
Content-Type: application/x-javascript
Content-Length: 226802
Connection: keep-alive
Date: Wed, 28 Aug 2013 10:37:38 GMT
Cache-Control: public, max-age=3600000
Last-Modified: Wed, 28 Aug 2013 10:36:42 GMT
ETag: "124752e0d85461a16e76fbdef2e84fb9"
Accept-Ranges: bytes
Server: AmazonS3
Age: 342557
Via: 1.0 6eb330235ca3971f6142a5f789cbc988.cloudfront.net (CloudFront)
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: 92Q2uDA4KizhPk4TludKpwP6Q6uEaKRV0ls9P_TIr11c8GQpTuSfhw==
Even though Amazon clearly sends Cache-Control, Chrome still makes a second request instead of reading the file from its cache.
GET /1.0.8/web-atoms.js HTTP/1.1
Host: d3bhjcyci8s9i2.cloudfront.net
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.57 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
If-None-Match: "124752e0d85461a16e76fbdef2e84fb9"
If-Modified-Since: Wed, 28 Aug 2013 10:36:42 GMT
Question:
Why does Chrome make a second request?
Expires
This behavior changes when I put an explicit Expires attribute in the headers: the browser will not send a subsequent request when Expires is present, but for Cache-Control: public it does. All of my S3 objects are immutable and will never change; when we change a file, we upload it as a new object with a new URL.
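For illustration, the header combination being described would look something like this on the response (the date is an arbitrary far-future example, not taken from the actual S3 configuration):

HTTP/1.1 200 OK
Content-Type: application/x-javascript
Cache-Control: public, max-age=3600000
Expires: Thu, 31 Dec 2037 23:59:59 GMT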
In Page Script Reference
Chrome makes the subsequent requests only sometimes; I did this test by actually typing the URL in the browser. When the script is referenced by an HTML page, Chrome loads the cached script for a few subsequent requests, but once in a while it again sends a request to the server. There is no disk-size issue here; Chrome has sufficient cache space.
The problem is that we get charged for every request, and I want the S3 objects to be cached forever, always loaded from the cache, and never fetched from the server again.
When you press F5 in Chrome, it will always send requests to the server. These are made with the Cache-Control: max-age=0 header. The server will usually respond with a 304 (Not Modified) status code.
When you press Ctrl+F5 or Shift+F5, the same requests are performed, but with the Cache-Control: no-cache header, thus forcing the server to send an uncached version, usually with a 200 (OK) status code.
If you want to make sure that you're utilizing the local browser cache, simply press Enter in the address bar.
If the HTTP response contains an ETag entry, the conditional request will always be made. The ETag is a cache validator: the client sends it back to the server to check whether the element has been modified.
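Abridged, that revalidation round-trip looks like this (the ETag value is the one from the question above):

GET /1.0.8/web-atoms.js HTTP/1.1
Host: d3bhjcyci8s9i2.cloudfront.net
If-None-Match: "124752e0d85461a16e76fbdef2e84fb9"

HTTP/1.1 304 Not Modified
ETag: "124752e0d85461a16e76fbdef2e84fb9"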
If Chrome Developer Tools are open (F12), Chrome usually disables caching.
It is controllable in the Developer Tools settings - the Gear icon to the right of the dev-tools top bar.
If you load the particular page or resource by hitting the refresh button, the If-Modified-Since request header is sent every time. If you instead request the page/resource in a new tab, or via a link in a script or HTML page, it is loaded from the browser cache itself.
This is what happened in my case; maybe it is the general case. I am not completely sure, but this is what I gathered from my digging.
Chrome adds the Cache-Control: max-age=0 header when you use a self-signed certificate; switching from HTTPS to HTTP removes it.
Firefox doesn't add this header.

Google Chrome issuing Head Request to random domains

Today I was investigating something with Fiddler when I noticed that, whenever I launch Google Chrome, there are always three HEAD requests to domains that appear to be randomly chosen.
Here is a sample:
HEAD http://fkgrekxzgo/ HTTP/1.1
Host: fkgrekxzgo
Proxy-Connection: keep-alive
Content-Length: 0
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31
Accept-Encoding: gzip,deflate,sdch
Do you have any idea why Google Chrome behaves this way?
Thanks guys
This is Chrome checking to see if your ISP converts non-resolving domains into your.isp.com/search?q=randomnonresolvingdomain
See https://mikewest.org/2012/02/chrome-connects-to-three-random-domains-at-startup
This algorithm seems unusable with forward proxy servers. The browser definitely asks for a random page, and the proxy definitely returns some page: an error (50x), a masked error (50x or 40x), or a nice "you are lost" page with HTTP status 200.