I'm encountering some unexpected behavior with subtitles on an HTML5 video.
I am storing my video and subtitle files on Google Cloud Storage, and I have a web interface for watching the movies.
My server generates signed URLs for the movie and subtitle files. The movies play fine, and I can view the subtitle text content by opening the URL directly. However, loading the subtitles through a <track> element does not work:
<track label="my_subtitle_file" src="my_signed_url" srclang="en" kind="subtitles">
In Firefox I get an error:
Security Error: Content at <my site> may not load data from <signed GCS url>
In Chrome I get a slightly different error, but I imagine it's the same problem:
Unsafe attempt to load URL <signed GCS URL> from frame with URL <my site>.
Domains, protocols and ports must match.
I suppose I could make a wrapper endpoint on my backend that fetches the content of the file and serves it, along the lines of the sketch below. But I am wondering: is there another way? And why do <track> elements have this severe restriction?
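For illustration, a minimal sketch of that wrapper idea, assuming an Express backend on Node 18+ (getSignedSubtitleUrl is a placeholder for my existing signing logic):

const express = require("express");
const app = express();

// Re-serve the subtitle file from my own origin so the <track> request
// is same-origin; the signed GCS URL is only ever used server-side.
app.get("/subtitles/:id", async (req, res) => {
  const signedUrl = await getSignedSubtitleUrl(req.params.id); // placeholder
  const upstream = await fetch(signedUrl); // global fetch, Node 18+
  res.type("text/vtt").send(await upstream.text());
});

app.listen(3000);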
Edit:
I added a CORS policy to the GCS bucket:
[
  {
    "origin": ["https://my-domain.com"],
    "responseHeader": ["Content-Type"],
    "method": ["GET"],
    "maxAgeSeconds": 3600
  }
]
On the GCS Console I also tried removing the "prevent public access" setting. I didn't make the files public, though (I still want to require signed URLs), but GCS seems a little weird in that you can turn off "prevent public access" without actually making anything public.
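One thing that may also matter here (my understanding of the HTML spec, not something I have fully verified): cross-origin <track> fetches only use CORS when the parent media element carries a crossorigin attribute, so the markup would need to look something like this for the bucket's CORS policy to take effect:

<video controls crossorigin="anonymous">
  <source src="my_signed_movie_url" type="video/mp4">
  <track label="my_subtitle_file" src="my_signed_url" srclang="en" kind="subtitles">
</video>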
Related
I have been having a long and frustrating experience trying to get AASA to work for webcredentials. My goal here is to allow usernames and passwords to be stored in the iOS keychain.
I did have this working on a root domain the other week, but that is not sufficient for my scenario, as I will explain. It didn't work for me straight away, I have to say; it eventually started working after a clean build, so at the time I thought that was the issue, but now I am not so sure.
I am using Expo with EAS Build. We have a multi-tenant application: from a single codebase we deploy to multiple apps in the store. All are on the same team ID, but they are separate applications and use separate credentials; nothing is shared.
I am confident that the textContentType of username and password on my TextFields is correct, as this has not changed from when I managed to get it working originally, and I have checked it countless times.
Expectation
For the "Save Password" prompt to be displayed after login. What I have noticed however is when going to store a password manually using "add password" via iCloudKeychain from the keyboard accessory this does accurately show the correct "TENANT_SUBDOMAIN.example.com". I find this confusing.
Goal Scenario
I am hosting a site on Netlify. I have it set up to support wildcard subdomains with a Let's Encrypt-provisioned wildcard SSL certificate. I then have Edge Functions which change the content of my index.html and apple-app-site-association files dynamically based on the requested subdomain.
I have added the Associated Domains capability to my provisioning profile.
I am using the latest Expo 47 and EAS Build. I have added the appropriate associated domains configuration, and I can see it when introspecting my entitlements under com.apple.developer.associated-domains; it is correct.
I am using TestFlight for testing. I am doing a --clean-build on EAS every time, and I also increase the runtime version. I have also tried manually refreshing credentials outside of the build process, which normally does this automatically. This must be using the correct provisioning profile; otherwise I would get a build failure, as the requested entitlements wouldn't match.
The AASA file is currently hosted just in the .well-known directory. I have tried using the root and also tried using both. There are no redirects taking place.
I am aware the AASA file is pulled on application installation and update. I repeatedly remove the apps and then reboot my phone in an attempt to reset any device caches.
The content-type of the file is application/json and I have confirmed this using developer tools in the browser.
There is no robots.txt or anything blocking the request from an infrastructure perspective. There are no additional firewalls or geo restricted access as I am just using plain Netlify to host this, nothing fancy.
I am confident the Team ID and bundle IDs are correct in the AASA file.
I remove the content-length header in the Edge Function so it is correctly calculated by the network instead, and I have confirmed this using curl.
When I check the file using https://app-site-association.cdn-apple.com/a/v1/site.example.com, Apple has the correct file cached on its CDN, so I would expect it to work.
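That check is just a plain GET against Apple's CDN, e.g.:

curl -s https://app-site-association.cdn-apple.com/a/v1/site.example.com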
I added an applinks section so I could use the Apple App Search API validation tool and the Branch.io AASA verification tool to verify correctness. Branch.io says the file is fine, and Apple says it's fine too, but because the app has not been deployed to the store yet I see "Error no apps with domain entitlements". From what I can tell this is normal in development and makes sense, as the tool uses the currently released version of the app to verify the deep link configuration. So to me this means Apple can parse the file correctly.
When I stream my device console logs, on install I can see the AASA requests for the correct domains. I see no errors from swcd; I just see "Beginning data task AASA-XXXX" with the correct domains.
When I run Charles Proxy on my phone with a verified SSL installation (also reinstalled a few times now), I do not see quite what I would expect, although the device logs seem to imply it is doing the correct thing. When I view the app-site-association... URL requests in Charles, there is one per application install, which is correct. The request is marked as Unknown, and when I look at it the host is shown but, as you would expect with SSL, I see no path. The info says METHOD: CONNECT with "Error - Input Error: EOF". This is the only error I see; I am not sure if it is a red herring and something to do with Charles. Given the error, as you would expect, there is no body in the request or response. It is worth noting that in general testing I have no VPN enabled, and I do not have Private Relay enabled in my iOS settings.
When I perform a sysdiagnose, I see the following at the matching timestamp in the swcutil_show.txt device log. This looks correct in comparison to the webcredentials and applinks services of other apps I see there, and I see no errors:
Service: webcredentials
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x141816200> { v = 0, t = 0x8, u = 0x1e7c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x7c1e000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-09 14:14:32 +0000
Next Check: 2022-12-14 14:03:00 +0000
Service: applinks
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x13fd38d00> { v = 0, t = 0x8, u = 0x219c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x9c21000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
Patterns: {"/":"*"}
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-13 13:13:23 +0000
Next Check: 2022-12-18 13:01:51 +0000
At end of file:
MYTEAMID.com.cf.example.b2c.ios: 8 bytes
(This seems correct for all apps)
Other Scenario
I have tried setting this up using an apex domain on another domain that Apple has not seen before. I have tried using a subdomain with a root domain serving the same content, and I have tried the subdomain and root domain on their own. I have also tried not using the Edge Functions and serving static files instead, but to no avail.
When I do this I ensure I wait for the Apple CDN to catch up and remove/add entries prior to deleting the apps, rebooting my device, and reinstalling to test.
AASA File
The AASA content comes back with the correct payload, Content-Type: application/json, and Content-Length headers, both from Apple's CDN and from the origin. When I somehow had this working in my initial test, it was on a root domain and I did not have an applinks section; that was only added so I could use the verification tools for universal links.
I am not sending back different or duplicated content, and I block the www subdomain. I have also tried it with a www subdomain, for the record.
{
  "applinks": {
    "details": [
      {
        "appIDs": [
          "MYTEAMID.com.cf.example.b2c.ios"
        ],
        "components": [
          {
            "#": "no_universal_links",
            "exclude": true,
            "comment": "Matches any URL with a fragment that equals no_universal_links and instructs the system not to open it as a universal link."
          }
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
I have also tried this with the older format:
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "MYTEAMID.com.cf.example.b2c.ios",
        "paths": [
          "*"
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
associatedDomains iOS Expo config:
associatedDomains: [
  `webcredentials:${SUBDOMAIN}.example.app`,
  `applinks:${SUBDOMAIN}.example.app`,
],
Help :)
I have been trying to get this to work for a long time now and I am completely out of ideas. If anybody has any suggestions, I would really appreciate it. I am very confused as to how the device's request seems correct and the CDN content is correct, yet it is still not working. It is also worth reiterating that I need different subdomains for each tenant, as the credentials must not be shared across apps, so the keychain-to-domain association store must be different for each app.
I am wondering if it's the Let's Encrypt wildcard SSL certificate, but I wouldn't expect the file to verify, or Apple to cache it, if that were the case. It seems very unlikely to me, but it is the only thing I haven't tried at this point.
Many Thanks,
Mark
Up until several weeks ago I was able to stream Icecast and SHOUTcast on my HTTPS site. This would create a "mixed content" warning but was never explicitly blocked.
Now I find that Chrome is forcing the http://streaminglink URLs to load as https://streaminglink, and I can't access the HTTP audio anymore.
Here is a code example using jPlayer:
$("#jquery_jplayer").jPlayer("setMedia", {
  mp3: "http://149.202.79.68:8213/stream.mp3"
});
I expect Chrome to load the HTTP URL, but instead it is looking for the HTTPS one, and I get the following error in the console:
GET https://149.202.79.68:8213/stream.mp3 net::ERR_CONNECTION_CLOSED
NOTE
The https ^ - that's not coming from my code or configuration... =/
So it looks like this is default behavior for Chrome since 79.
https://www.engadget.com/2019/10/04/chrome-security-block-http-content/
Broke my site. Thanks Google.
You can now allow insecure content in the site-specific settings:
chrome://settings/content/siteDetails?site=https%3A%2F%2F<SITE_DOMAIN>
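If you control a server, another option (my own sketch, not a jPlayer feature) is to relay the stream over your HTTPS origin so the browser never sees an http:// URL; the upstream address below is the one from the question:

const http = require("http");

// Mount this handler behind your existing HTTPS server; it fetches the
// insecure stream server-side and pipes the bytes through unchanged.
function relayStream(req, res) {
  http.get("http://149.202.79.68:8213/stream.mp3", (upstream) => {
    res.writeHead(upstream.statusCode || 200, {
      "Content-Type": upstream.headers["content-type"] || "audio/mpeg",
    });
    upstream.pipe(res);
  }).on("error", () => {
    res.writeHead(502);
    res.end();
  });
}

jPlayer would then point at a same-origin HTTPS path (e.g. https://your-domain/stream.mp3) instead of the raw HTTP URL.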
I've been working on a Spring MVC application that has custom error pages; these pages return a generic error message and the stack trace as an HTML comment. I'm currently developing the offline functionality of this application using HTML5's AppCache. My manifest is something like this:
CACHE MANIFEST
CACHE:
... my explicit entries (not relevant) ...
FALLBACK:
... some fallback entries (not relevant) ...
# This next line is relevant
/ pageNotFoundOffline.html
SETTINGS:
prefer-online
This is supposed to serve a previously cached 404 page when the user can't connect to the server. The problem is that it also serves this 404 page when a server error occurs, rendering the custom error page already implemented in the application completely useless.
Why do I want to do this? I want that whenever a user tries to access any page in my application and the request fails with a 404 (either because there is no available internet connection or because the servers are down), the user is informed that the request failed and is being redirected to our offline functionality. I also want to inform the user when he or she successfully reaches our servers but an internal error occurs (through the custom error page).
Is there a workaround for this problem? What I would like to accomplish is for the cached 404 page to be served only when there is a 404 error, and for the custom error page returned by the server to be displayed when there was an internal error.
I'm afraid that it is not possible with AppCache; the fallback intercepts all server errors. The specification for AppCache says: "If this results in a redirect to a resource with another origin (indicative of a captive portal), or a 4xx or 5xx status code or equivalent, or if there were network errors (but not if the user canceled the download), then instead get, from the cache, the resource of the fallback entry corresponding to the fallback namespace f. Abort these steps."
If you can use the more modern way of doing things, Service Workers, I would recommend using that, as that will let you do what you want.
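To illustrate why Service Workers do not have this limitation, here is a minimal sketch (the file name is taken from your manifest): the catch branch of a fetch handler only runs on an actual network failure, never on a 4xx/5xx response, so the server's own error pages pass through untouched.

const OFFLINE_404 = "/pageNotFoundOffline.html";

// Pre-cache the offline page at install time.
self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open("offline-v1").then((cache) => cache.add(OFFLINE_404))
  );
});

// Network first: the cached page is served only when fetch() rejects,
// i.e. the server was unreachable, not when it returned an error page.
self.addEventListener("fetch", (event) => {
  event.respondWith(
    fetch(event.request).catch(() => caches.match(OFFLINE_404))
  );
});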
This is my first post on Stack Overflow, and I have tried to search for the answer to a problem I am currently having with CloudFront serving a static S3 website page; to be precise, a custom 404 error page. Hope you can help me out :=))
I am not using any code, simply using the AWS console as a POC. Here is the scenario:
a) I have created two buckets. The names are (as an example): mybucket.com and www.mybucket.com.
b) I have placed my static site (a very simple one) inside mybucket.com and redirect www.mybucket.com to it.
c) The content bucket (mybucket.com) has an index.html file and an image file. I have created a folder under the bucket (called error) and placed a custom error message file called 404error.html in it.
d) The index.html file also calls simple JavaScript code that loads the contents of another file (welcome.html) from another bucket (resource.mybucket.com). I have ensured that bucket is enabled for CORS and it is working.
e) The bucket has a bucket policy that allows everyone access to the bucket and its contents. The bucket policy is shown below:
{
  "Id": "Policy1402570669260",
  "Statement": [
    {
      "Sid": "Stmt1402570667997",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket.com/*",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
f) I have ensured that www.mybucket.com and resource.mybucket.com also have the same policy.
g) mybucket.com has been configured for static website hosting, and the error file has been configured to be error/404error.html.
h) If I access the site using the S3 website URL (mybucket.com.s3-website-.amazonaws.com) and try to access a non-existent file (say myfile.html), it correctly shows the custom 404 error page.
The problem arises when I try to access the page using the CloudFront distribution. I created a CloudFront distribution on the S3 bucket (mybucket.com) and here are the properties I set:
a) Error Page:
i) HTTP Error Code: 404-Not Found
ii) Error Caching TTL: 300
iii) Customize Error Response: Yes
iv) Response Page Path: /error/404error.html
v) HTTP Response Code: OK
b) A separate cache behaviour was set as well:
i) Path Pattern: /error/*
ii) Restrict Viewer Access: No
I am keeping it very simple and standard. I am not forwarding cookies or query strings or using any signed URLs etc.
Once the distribution is created, when I try to access the site with the CloudFront URL, the main page works fine. If I test with a non-existent page, however, I am not served the custom 404 error page that I configured. Instead, I get the following XML file in the browser (Chrome/Firefox, latest):
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>EB4CA7EC05512320</RequestId>
  <HostId>some-really-big-alphanumeric-id</HostId>
</Error>
No clue is shown in the Console when I try to inspect elements from the browser.
Now, I know this AccessDenied error has been reported and discussed before, and I have tried what most suggest: giving full access to the bucket. I have ensured that (as you can see from the bucket policy above, access is open to anybody). I have also tried to ensure the Origin Access Identity has been given GetObject permission. I have also dropped and recreated the CloudFront distribution, and deleted/re-uploaded the error folder and the 404error.html file within it. The error file is directly accessible from the CloudFront URL:
http://xxxxxxxx.cloudfront.net/error/404error.html
But it does not work if I try to access an arbitrary non-existent file:
http://xxxxxxxx.cloudfront.net/myfile.html
Is there something I am missing here?
I really appreciate your help.
Regards
Here is a rudimentary policy for making your S3 bucket work with CloudFront custom error pages.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Action": [
      "s3:ListBucket"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::<yourBucket>",
    "Principal": "*"
  }, {
    "Action": [
      "s3:GetObject"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::<yourBucket>/*",
    "Principal": "*"
  }]
}
As Ben W already pointed out, the trick is to grant the ListBucket permission. Without it you will get an Access Denied error.
It may also be worth mentioning that 5xx error pages only make sense if you serve them from a bucket other than the one your website is in.
Also, a 404 error should respond with a 404 status code, even on your custom error page, and not suddenly turn into a 200. The same goes for the other error codes, of course.
If you set up a bucket (the-bucket) quickly, you may have set it up without List permissions for Everyone. This will stop CloudFront from correctly determining whether an asset is a 404.
Your upload tool may be setting read permissions on each object, so you will not notice this lack of bucket-level permissions.
So if you request <URL>/non-existent.html, CloudFront tries to read the bucket, e.g. http://the-bucket.s3.amazonaws.com/non-existent.html:
If list permissions are granted to Everyone, a 404 is returned, and CloudFront can remap the request as a 200 or a custom 404.
If list permissions are not granted to Everyone, a 403 is returned, and CloudFront returns that 403 to the end user (which is what you are seeing in the log).
It makes perfect sense, but is quite confusing!
I got these hints from http://blog.celingest.com/en/2013/12/12/cloudfront-configuring-custom-error-pages/ and your question might also be related to https://serverfault.com/questions/642511/how-to-store-a-cloudfront-custom-error-page-in-s3/
You need another S3 bucket to host your error pages
You need to add another CloudFront origin pointing to the bucket where your error pages are
The cache behaviour of the newly-created origin should have a Path Pattern pointing to the folder (in the error page bucket) where the error pages reside
You can then use that path in the Response Page Path when you create the Custom Error Response config
S3 permissions changed in 2019. If you're reading this in 2019 or later, you can't follow any of the above advice! I made a tutorial to follow on YouTube:
https://www.youtube.com/watch?v=gBkysF8_-Es
I ran into this when setting up a single-page app where I wanted every missing path to render /index.html.
If you set up the CloudFront "Error pages" handling to redirect from HTTP error code 403 to the path /index.html with response code 200, it should just work.
If you set it to handle error code 404, you'll get AccessDenied unless you give everyone ListBucket permissions, like some other answers describe. But you don't need that if you handle 403 instead.
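For reference, this is roughly what that mapping looks like in the distribution config (a sketch using the CustomErrorResponses structure from the CloudFront API; adjust the values to your setup):

"CustomErrorResponses": {
  "Quantity": 1,
  "Items": [
    {
      "ErrorCode": 403,
      "ResponsePagePath": "/index.html",
      "ResponseCode": "200",
      "ErrorCachingMinTTL": 300
    }
  ]
}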
Is it possible for a Chrome extension to listen for streaming audio from any of the browser's tabs? I would like to capture the streaming audio data and then analyse it.
Thanks
You could try three ways, though none of them provides a 100% guarantee of meeting your needs.
Before going into more detailed descriptions, I must note that Chrome extensions do not provide convenient tools for working at the per-connection level, the sufficiently low level required for stream capturing. This is by design. This is why the first way is:
To look at other browsers, for example Firefox, which provides low-level APIs for connections. They are already known to be used by similar extensions; you may have a look at MediaStealer. If you do not have a specific requirement to build your system on Chrome, you should possibly move to Firefox.
You can develop a Chrome extension which intercepts HTTP requests by means of the webRequest API, analyses their headers, and extracts media URLs (such as those with an audio/mpeg MIME type in the HTTP headers); a rough sketch follows at the end of this answer. For a quick code example you may look at the following SO question - How to change response header in Chrome. Having the URL, you may force an appropriate media download as a file. It will land in the default downloads folder and may have an unfriendly name. (I made such an extension, but I did not have requirements for further processing.) If you need to further process such files, it can be a challenge to monitor them in the folder and run additional analysis in a separate program.
You may have a look at NPAPI plugins in general, and their streaming APIs in particular. I can imagine that you create a plugin registered for, again, the audio/mpeg MIME type, which receives the data via the NPP_NewStream, NPP_WriteReady and NPP_Write methods. The plugin can be wrapped into a Chrome extension. Though I have made NPAPI plugins, I never used this API, and I'm not sure it will work as expected. Nevertheless, I'm mentioning this possibility here for completeness. This method requires some coding other than web coding, meaning C/C++. NB: NPAPI plugins are deprecated and have not been supported in Chrome since September 2015.
Taking into account that you have some external (to the extension) "fingerprinting service" in mind, which sounds like intelligent data processing, you may be interested in building the whole system outside of a browser. For example, you could involve an HTTP proxy that saves media from passing traffic.
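For the second option, a rough sketch of what the interception could look like (assuming a background script with the webRequest, downloads, and host permissions declared in the manifest):

chrome.webRequest.onHeadersReceived.addListener(
  function (details) {
    var contentType = (details.responseHeaders || []).filter(function (h) {
      return h.name.toLowerCase() === "content-type";
    })[0];
    if (contentType && contentType.value.indexOf("audio/mpeg") !== -1) {
      // Media response detected: force a download of the same URL.
      chrome.downloads.download({ url: details.url });
    }
  },
  { urls: ["<all_urls>"] },
  ["responseHeaders"]
);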
If you're writing a Chrome extension, you can use the Chrome tabCapture API to record audio.
chrome.tabCapture.capture({audio: true}, function(stream) {
  var recorder = new MediaRecorder(stream);
  [...]
});
The rest is left as an exercise to the reader; MDN has more documentation on how to use MediaRecorder.
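One possible shape for that exercise (my own sketch, not official sample code): buffer the chunks MediaRecorder produces and assemble them into a Blob once capture stops.

chrome.tabCapture.capture({audio: true, video: false}, function(stream) {
  var recorder = new MediaRecorder(stream);
  var chunks = [];

  recorder.ondataavailable = function(e) { chunks.push(e.data); };
  recorder.onstop = function() {
    var blob = new Blob(chunks, {type: "audio/webm"});
    // Hand `blob` to your analysis code here.
  };

  recorder.start(1000);                               // one chunk per second
  setTimeout(function() { recorder.stop(); }, 10000); // e.g. capture ten seconds
});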
When this question was asked in 2013, neither chrome.tabCapture nor MediaRecorder existed.
Mac OS X solution using Soundflower: http://rogueamoeba.com/freebies/soundflower/
After installing Soundflower, it should appear as a separate audio device in the sound preferences (Apple > System Preferences > Sound). Divert the computer's audio to the 2ch option (stereo; 16ch is surround), then inside a DAW, such as Audacity, set the audio input to Soundflower. Now the sound should be channeled to your DAW, ready for recording.
Note: having diverted the audio from the internal speakers to Soundflower, you will only be able to hear the audio if the Soundflowerbed app is actually open. You know it's open if there's an 8-legged blob in the menu bar at the top right. Clicking this icon gives you the Soundflower options.
My Privoxy log shows the following:
2013-08-28 18:25:27.953 00002f44 Request: api.audioaddict.com/v1/di/listener_sessions.jsonp?_method=POST&callback=_AudioAddict_WP_ListenerSession_create&listener_session%5Bid%5D=null&listener_session%5Bis_premium%5D=false&listener_session%5Bmember_id%5D=null&listener_session%5Bdevice_id%5D=6&listener_session%5Bchannel_id%5D=178&listener_session%5Bstream_set_key%5D=webplayer&_=1377699927926
2013-08-28 18:25:27.969 0000268c Request: api.audioaddict.com/v1/ping.jsonp?callback=_AudioAddict_WP_Ping__ping&_=1377699927928
2013-08-28 18:25:27.985 00002d48 Request: api.audioaddict.com/v1/di/track_history/channel/178.jsonp?callback=_AudioAddict_TrackHistory_Channel&_=1377699927942
2013-08-28 18:25:54.080 00003360 Request: pub7.di.fm/di_progressivepsy_aac?type=.flv
So I got the stream URL and recorded it:
D:\Profiles\user\temp>wget pub7.di.fm/di_progressivepsy_aac?type=.flv
--18:26:32-- http://pub7.di.fm/di_progressivepsy_aac?type=.flv
=> `di_progressivepsy_aac#type=.flv'
Resolving pub7.di.fm... done.
Connecting to pub7.di.fm[67.221.255.50]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [video/x-flv]
[ <=> ] 1,234,151 8.96K/s
I got a file that can be played in any multimedia player.