iOS 7 memory issue loading a JSON file via AngularJS (Ionic) httpPromise.then

I have an odd issue. I am using AngularJS (Ionic) to load an external JSON file via an httpPromise.
Everything worked correctly until yesterday, when the remote files were moved to another host.
Now, on an iPhone 4 running iOS 7, the app tries to load the file but can't, and crashes with memory usage issues. Inspecting it in Xcode, memory quickly climbs to over 300 MB before the crash. It does the same thing on two devices, but runs fine on other phones, emulators, etc.
If I host the file on another server, it works as expected.
The response headers from the two hosts differ.
The one that fails:
Accept-Ranges: bytes
Connection: close
Content-Length: 721255
Content-Type: application/json
Date: Thu, 11 Dec 2014 06:04:15 GMT
Last-Modified: Thu, 11 Dec 2014 05:12:57 GMT
Server: LiteSpeed
Vary: User-Agent
The working host:
Accept-Ranges: bytes
Connection: keep-alive
Content-Encoding: gzip
Content-Type: application/json
Date: Thu, 11 Dec 2014 06:05:01 GMT
Last-Modified: Thu, 11 Dec 2014 03:29:48 GMT
Server: nginx/1.6.2
Transfer-Encoding: chunked
Vary: Accept-Encoding,User-Agent
Code used to get the JSON file:
var deferred = $q.defer(),
    httpPromise = $http.get('http://someurl.com.au/deals.json');

httpPromise.then(function success(response) {
    deferred.resolve(response.data);
});
So, after all of that, my question is: why would the JSON file not load, or at least return an error of some sort, instead of causing such a memory spike?
The only major difference I can see between the servers is the connection configuration: the working host uses Connection: keep-alive, while the failing one uses Connection: close.
Thoughts?
Additionally, I've just tried it over 3G and it works fine, but over Wi-Fi it doesn't. Why would that be?

I think you need to focus on the Content-Encoding: gzip header, which is probably what is saving you from memory issues on the working host.
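One way to confirm this from inside the app is a minimal sketch like the following, reusing the asker's URL (note that a cross-origin response only exposes Content-Encoding if the server sends Access-Control-Expose-Headers, so the log may show null):

var httpPromise = $http.get('http://someurl.com.au/deals.json');
httpPromise.then(function (response) {
    // AngularJS exposes response headers via a getter function
    console.log('Content-Encoding:', response.headers('Content-Encoding'));
}, function (error) {
    // an outright network failure lands here instead of failing silently
    console.log('request failed with status', error.status);
});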

Related

Save and serve bundle.js as a gzipped version only

I have a React app and save the bundle.js on a CDN (or S3 for this example).
On save I run gzip -9.
On upload to the CDN / S3 I add the header Content-Encoding: gzip.
Now each time a browser / HTTP client downloads the bundle, it will get:
curl -I https://cdn.example.com/bundle.min.js
HTTP/2 200
content-type: application/javascript
content-length: 3304735
date: Wed, 27 Feb 2019 22:27:19 GMT
last-modified: Wed, 27 Feb 2019 22:26:53 GMT
content-encoding: gzip
accept-ranges: bytes
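(For reference, that upload step might look like the following with the AWS SDK for JavaScript; the bucket and file names are illustrative, not the asker's.)

const AWS = require('aws-sdk');
const fs = require('fs');
const s3 = new AWS.S3();

s3.putObject({
    Bucket: 'my-cdn-bucket',                    // illustrative name
    Key: 'bundle.min.js',
    Body: fs.readFileSync('bundle.min.js.gz'),  // the gzip -9 output
    ContentType: 'application/javascript',
    ContentEncoding: 'gzip',                    // the header set on upload
}, function (err) { if (err) throw err; });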
This works fine when I test it in a browser. My only concern is that we now save only a gzipped version of the JS bundle, and users will get it regardless of whether they send Accept-Encoding: gzip in the request.
I can't think of any issues this will cause for browsers, but I might be missing something.
Is it bad practice to "enforce" gzip in the response for the bundle.js file?
It's several months late, but the answer may still be relevant to someone trying the same thing.
Pre-compression with high compression settings can help save a few more percentage points of bandwidth on static resources. However, forcing a single encoding may create problems for some users. According to a study conducted in 2010, ~15% of users with gzip-capable browsers were not sending an appropriate Accept-Encoding request header, the reason being anti-virus software or intermediaries/proxies stripping the Accept-Encoding header to force the server to send the content in plain text.
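A minimal Node sketch of the safer approach (not the asker's setup; file names are assumed): serve the pre-compressed file only when the client advertises gzip support, and fall back to the plain file otherwise.

const http = require('http');
const fs = require('fs');

http.createServer(function (req, res) {
    const acceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '');
    res.setHeader('Content-Type', 'application/javascript');
    res.setHeader('Vary', 'Accept-Encoding'); // caches must key on the encoding
    if (acceptsGzip) res.setHeader('Content-Encoding', 'gzip');
    fs.createReadStream(acceptsGzip ? 'bundle.min.js.gz' : 'bundle.min.js').pipe(res);
}).listen(8080);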
If you need a CDN that covers the gap, PageCDN serves exactly the same purpose, but offers superior Brotli-11 compression and falls back to gzip for browsers that do not support Brotli. You do not need to connect to S3 or any external storage. Just connect it to your website or GitHub, or manually upload files to the CDN, and configure the compression level for files.

Angular i18n JSON cache issues using nginx

I'm using Angular with i18n translations in JSON files like de.json and en.json. In my production environment (nginx) I have the problem that these JSON files are cached by the web browser. After an upgrade, Chrome will not download the new version of the current JSON file even though the date header has changed.
Request Information (Chrome):
Request URL: https://[my-site]/assets/i18n/de.json
Request Method: GET
Status Code: 200 (from disk cache)
Remote Address: X.X.X.X:443
Referrer Policy: no-referrer-when-downgrade
Response Headers:
content-encoding: gzip
content-type: application/json
date: Fri, 15 Feb 2019 09:04:42 GMT
etag: W/"5c62bf4d-2aea"
last-modified: Tue, 12 Feb 2019 12:42:53 GMT
server: nginx/1.14.0 (Ubuntu)
status: 304
Does anyone have experience with this problem and can help me?
Not specifically a fix for Angular/nginx, but a practice I often use is to append a query string parameter to the resource when loading it. For me, this is typically derived from the version number of the .js file / application, e.g. using it as a seed for an RNG.
So, instead of: <script src="/assets/de.json" />
use
<script src="/assets/de.json?_=12345" />
Bonus points: in your Angular application, you can keep track of which version of the assets you want to include, meaning that you can release new asset files without clients immediately updating to them if they have the old ones in local cache (although note that new clients will get the new version regardless).
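As a concrete sketch (the names are illustrative, not from the question): derive the cache-buster from the app version so each release forces a fresh download of the JSON.

var APP_VERSION = '1.4.2'; // e.g. injected at build time

function i18nUrl(lang) {
    // a new APP_VERSION changes the URL, so the browser cannot reuse its cached copy
    return '/assets/i18n/' + lang + '.json?v=' + encodeURIComponent(APP_VERSION);
}

// i18nUrl('de') -> "/assets/i18n/de.json?v=1.4.2"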

How to debug mod_perl with reverse proxy of mod_proxy?

I'm new to this with very little background, so bear with me. I want to debug why the page from the reverse-proxied site terminates loading prematurely when it requests something like a filename.json file. I noticed the request header has Content-Type: application/x-www-form-urlencoded, and the response headers are:
$WSEP:
Cache-Control:no-cache
Connection:Keep-Alive
Content-Encoding:gzip
Content-Language:en-US
Content-Length:55
Content-Type:text/html;charset=UTF-8
Date:Sun, 12 Jun 2016 04:54:18 GMT
Expires:Thu, 01 Jan 1970 00:00:00 GMT
Keep-Alive:timeout=5, max=97
Server:Apache
Vary:Accept-Encoding
X-Cnection:Close
But I'm wondering why the Content-Type is text/html when it's JSON. I tried manipulating the response to application/json, and even, instead of downloading the page from the remote server, made it download locally from my server's /var/www/html by creating a dummy JSON file. I was thinking that because this is a JSON file, the Content-Type should be application/json, not text/html. But the issue still exists: loading just stops when it reaches the request for that JSON file.
Now my question is: how can I make sure that the request for that JSON file is the culprit? Or, more appropriately, how can I debug mod_perl? I tried adding
use warnings;
in the code and
PerlWarn On
in the Apache config, but I still don't see any warnings/errors that would guide me in debugging.
Can you give me advice on how to find the fault?
By the way, the site of that JSON file being requested returns a 404, but I assume the proxy server should ignore that and continue downloading the other files.

Why does chrome not request more recent version of cached file?

I have a caching issue. Chrome does not always load newer versions of site assets, most often JavaScript files loaded by Require.js. Right now I've been having this problem for over 24 hours with a particular file.
If I load the page with devtools (network tab) open, the offending files typically show an HTTP 200 response, but the "Size" column shows "(from cache)". In the Headers details it shows "Provisional headers are shown". Wireshark shows that the file is indeed not requested from the server.
Chrome shows the Last-Modified date of the file as Sat, 06 Dec 2014 01:27:55 GMT, but my below raw request to the server clearly indicates the file has changed much more recently.
If I do a raw request myself I don't see anything in the headers returned by the server that should cause this problem:
GET /js/path/to/file.js HTTP/1.1
Host: static.mydomain.com
User-Agent: Matt

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Vary: Accept-Encoding
Content-Type: application/javascript
Accept-Ranges: bytes
ETag: "4203477418"
Last-Modified: Fri, 16 Jan 2015 18:28:30 GMT
Content-Length: 5704
Date: Fri, 16 Jan 2015 21:05:06 GMT
Server: lighttpd/1.4.33
.... data here ...
The issue has been reported by chrome users on multiple OSes with different versions of chrome, but I do not typically receive reports of caching issues on other browsers. (Right now I'm on "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36")
Edit:
The problem seems to be most pronounced with files loaded by Require.js, though I have encountered it with JavaScript directly referenced in the page as well.
What am I missing here? Why won't chrome check for a new version of the file?
As it turns out, browsers have poorly documented behavior regarding caching when the Cache-Control header is not specified in the server response (well, at least when no caching behavior is specified). In general, it seems that in this case a browser determines how long to cache the item for based on the file's last modified date (if declared in the response), the current date, and ????
See: https://webmasters.stackexchange.com/questions/53942/why-is-this-response-being-cached
Sadly, Google's official page on HTTP caching does not mention what happens if the header is not set: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching
Edit:
I ran across some more specific information about the heuristics used, here: What heuristics do browsers use to cache resources not explicitly set to be cachable?
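For reference, the common heuristic (suggested by RFC 7234, though browsers are not required to follow it exactly) is to treat the response as fresh for roughly 10% of the time elapsed since Last-Modified. A quick calculation with the dates from this question shows why the stale copy survived for more than a day:

var responseDate = new Date('Fri, 16 Jan 2015 21:05:06 GMT');
var lastModified = new Date('Sat, 06 Dec 2014 01:27:55 GMT'); // the date Chrome had cached
var freshnessMs = 0.1 * (responseDate - lastModified);
console.log(freshnessMs / (1000 * 60 * 60)); // ~100 hours of heuristic freshness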

Chrome freezes when playing videos from Rackspace cloudfiles

I can't seem to get Chrome to play videos with the HTML5 video tag when I host them on a Rackspace Cloud Files server.
It works perfectly on the regular hosting, but as soon as I link the video with the Rackspace CDN URL, Chrome freezes (total freeze, website UI completely blocked; after a while Chrome pops up a message saying "The following page has become unresponsive bla bla bla").
The video file itself is fine, as it's the same one I link to on the regular hosting.
I did a bit of spying on the requests and initially thought the problem was that the webm files were served by default with the application/octet-stream MIME type. I lodged a ticket with Rackspace and they gave me a way to force the MIME type when uploading the file. I did that, and the file is now correctly sent as video/webm, but Chrome still freezes.
Any idea what could be going wrong here?
EDIT: using iheartvideo, loading the video from Rackspace triggers a MEDIA_ERR_SRC_NOT_SUPPORTED. The same video off a local web server works totally fine (??)
EDIT 2: Happens on both Mac and Windows with the latest mainstream Chrome.
EDIT 3: curl -I results:
Rackspace (no worky):
HTTP/1.1 200 OK
Server: nginx/0.7.65
Content-Type: video/webm
Last-Modified: Thu, 24 Feb 2011 23:45:12 GMT
ETag: 7029f83b241aa691859012dfa047e20d
Content-Length: 20173074
Cache-Control: public, max-age=900
Expires: Fri, 25 Feb 2011 01:32:11 GMT
Date: Fri, 25 Feb 2011 01:17:11 GMT
Connection: keep-alive
Web Server (worky):
HTTP/1.1 200 OK
Date: Fri, 25 Feb 2011 01:17:51 GMT
Server: Apache
Last-Modified: Thu, 24 Feb 2011 03:56:26 GMT
ETag: "11a0b47-133d112-49cff32940e80"
Accept-Ranges: bytes
Content-Length: 20173074
Content-Type: text/plain
EDIT 4: For those interested, this is what the Rackspace crew told me to do to set a webm content type on a file:
The file browser is not smart enough to determine the content type video/webm. Unfortunately, there is no way to change the content type of a file that has already been uploaded. You'll need to use one of the APIs to re-upload your files with the correct content type.
You can also use curl from a Linux/macOS command line if available. Using your username and API key, run this command:
curl -I -X GET -H "X-Auth-User: USERNAME" -H "X-Auth-Key: API_KEY" https://auth.api.rackspacecloud.com/v1.0
From the output there are 2 important values:
X-Storage-Url: https://storage101.......
X-Storage-Token: Long hash
You can then upload the files with:
curl -X PUT -T test.webm -H "Content-Type: video/webm" -H "Content-Length: FILESIZEINBYTE" -H "X-Auth-Token: TOKEN FROM RESPONSE ABOVE" https://STORAGE URL FROM RESPONSE ABOVE/test.webm
You must specify the content type, and you must give the correct length in bytes of what is being uploaded. If not, you will get an invalid request error.
I work quite a bit with the Rackspace API.
Their API actually allows you to set a container as streaming enabled. My first instinct tells me that you haven't done this.
I stream various file types and they all work an absolute treat.
There is more information about CDN Streaming Enabled containers here:
http://docs.rackspace.com/files/api/v1/cf-devguide/content/Streaming-CDN-Enabled_Containers-d1f3721.html
Hopefully this helps, but if not, let me know and I don't mind putting some PHP example code together to help you along.
It's all quite easy, but getting your head around the various API operations that Rackspace has implemented can sometimes be a daunting task.
I don't have a concrete answer, just some thoughts:
What about other browsers?
It works on the web server even though the content type there is text/plain, so why force video/webm?
Can Rackspace provide you (or can you find on their site or someone else's) some sample content that does play, so you can inspect it?
You could try the free trial from Brightcove or BitGravity and see if that works.