This seems to only happen in Chrome (latest version 31.0.1650.48 m, but also earlier), but since it doesn't always happen it's hard to say for sure.
When streaming audio stored in Azure Blob storage, Chrome will occasionally play about 30-50% of the track and then stop. It's hard to reproduce, but if I clear the cache and play the file over and over again, it eventually happens. An example file can be found here.
The error is pretty much the same as what's described here, but I've yet to see the problem on any files hosted elsewhere.
Update:
The Azure Blob storage log only shows AnonymousSuccess entries, no error messages. This is what I get:
1.0;2013-11-14T12:10:10.6629155Z;GetBlob;AnonymousSuccess;200;3002;269;anonymous;;p3urort;blob;"http://p3urort.blob.core.windows.net/tracks/bd2fd171-b3c5-4e1c-97ba-b5109cf15098";"/p3urort/tracks/bd2fd171-b3c5-4e1c-97ba-b5109cf15098";c377a003-ea6b-4982-9335-15ebfd3cf1b1;0;160.67.18.120:54132;2009-09-19;419;0;318;7663732;0;;;"0x8D09A26E7479CEB";Friday, 18-Oct-13 14:38:53 GMT;;"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.48 Safari/537.36";"http://***.azurewebsites.net/";
Apparently you have to set the content type to audio/mpeg3.
Here's how I do it:
// Upload the blob first, then set its MIME type and persist the change with SetProperties()
CloudBlockBlob blockBlob = container.GetBlockBlobReference(fileName);
blockBlob.UploadFromStream(theStream);
theStream.Close();
blockBlob.Properties.ContentType = "audio/mpeg3";
blockBlob.SetProperties();
From here: https://social.msdn.microsoft.com/Forums/azure/en-US/0139d27a-0325-4be1-ae8d-fbbaf1710629/unable-to-load-audio-in-html5-audio-tag-from-storage?forum=windowsazuredevelopment
[edit] - This didn't actually work for me. I'm trying to troubleshoot, but I don't know what's wrong; I'm going to ask a new question.
This mp3 only plays for 1.5 min and then stops. When downloaded, the file plays fully...
https://orator.blob.core.windows.net/mycontainer/zenhabits.net.unsolved.mp3
I'm trying to scrape a specific website's subpages using requests and bs4. I have the pages stored in a list that I loop over. The script works fine with other websites, so I think the problem is with this page itself: I can't access it with my browsers either, or only for a limited time (a few seconds). I've tried all of my browsers (Chrome, Firefox, Edge, Explorer), removed every cookie and other browsing data, etc.
I'm using headers:
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36",
    "Upgrade-Insecure-Requests": "1",
    "DNT": "1",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate"}
and here is the code to request the page:
cz_link = requests.get(cz_page, timeout=10, verify=False, headers=headers)
where "cz_page" is the item in the list that holds the pages I want to parse.
After 5 or 6 pages are loaded the next page won't load.
I've tried "https://downforeveryoneorjustme.com/" to check if the page is up, and it is, "it's just me."
Is there any way I can access the pages through Python requests even though I'm not able to load the site in my browsers?
My next attempt will be to run the script with a VPN on, but I'm curious whether there is another solution, since I can't use a VPN every time I need to run this script.
Thank you!
The solution was to add a delay, but bigger than 5 seconds. I experimented with it, and it seems that after 5 pages are loaded I get blocked and have to wait at least 10 minutes before trying again.
So I added a counter inside the loop, and after it hit 5 I used time.sleep() for 10 minutes and restarted the counter.
It is slow, but it works.
Thanks for the suggestions though!
WHAT IS MY PROBLEM?
My website's live streaming player uses hls.js. According to my server stats, there are many cases where the player gets stuck in the middle of a buffered range.
Here is my server's raw stat log (some irrelevant params removed):
tm=2019-09-27 12:04:41`bufferLevel=8.447303999999974`currentTime=158.4`buffered=[6.024,166.832]`readyState=4`ua=Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36 QBCore/3.53.1153.400 QQBrowser/9.0.2524.400 Tencent AppMarket/4.8 GameCenter
currentTime comes from HTMLMediaElement.currentTime and buffered comes from HTMLMediaElement.buffered:
currentTime=158.4
buffered=[6.024,166.832]
readyState=4
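For reference, this is roughly how such values can be read from the video element hls.js is attached to (a sketch for illustration, not my actual stat code):
var video = document.querySelector('video');       // the media element the player is attached to
var ranges = [];
for (var i = 0; i < video.buffered.length; i++) {   // buffered is a TimeRanges object
    ranges.push([video.buffered.start(i), video.buffered.end(i)]);
}
console.log(video.currentTime, JSON.stringify(ranges), video.readyState);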
From the W3C spec:
If HTMLMediaElement.buffered contains a TimeRange that includes the current playback position and enough data to ensure uninterrupted playback:
Set the HTMLMediaElement.readyState attribute to HAVE_ENOUGH_DATA.
Playback may resume at this point if it was previously suspended by a transition to HAVE_CURRENT_DATA.
In this case, the current playback position (158.4) is well inside the buffered range [6.024, 166.832], so the video should be progressing, but it is not.
Hls.js checks every 100 ms whether currentTime has progressed. If currentTime has not progressed for 1000 ms, hls.js triggers a stall event and I send a stall stat to my server.
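Conceptually the check looks something like this (a simplified sketch of the polling logic, not hls.js's actual code; sendStallStat is a hypothetical name for my reporting hook):
var lastTime = video.currentTime;   // same video element as above
var stalledFor = 0;
setInterval(function () {
    if (!video.paused && video.currentTime === lastTime) {
        stalledFor += 100;
        if (stalledFor >= 1000) {
            sendStallStat();        // hypothetical: report the stall to my server
            stalledFor = 0;
        }
    } else {
        stalledFor = 0;
        lastTime = video.currentTime;
    }
}, 100);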
I cannot reproduce this problem on my side; it only appears in my server stats.
WHAT I'VE TRIED
Shaka Player has a module that detects this case (https://www.ellealcatrase.eu/player2/docs/api/lib_media_stall_detector.js.html). Its comment says:
Some platforms/browsers can get stuck in the middle of a
buffered range (e.g. when seeking in a background tab). Detect when
we get stuck so that the player can respond.
but I cannot reproduce it when my browser is in a background tab.
1) start local web server
C:\Users\Public\Documents\Rick>http-server . -p 8832 --cors
Starting up http-server, serving . on: http://0.0.0.0:8832
Hit CTRL-C to stop the server

Partial log from (node.js) http-server . -p 8832 --cors:

[Mon, 15 Jun 2015 18:14:57 GMT] "GET /2015_03_19_Try6a3D_dae/2015_03_19_Try6a3D/scrn_ground.png" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36"
2) start html file that loads 2015_03_19_Try6a3D_dae/2015_03_19_Try6a3D.dae
from collada.html (javascript console):

Uncaught SecurityError: Failed to execute 'texImage2D' on 'WebGLRenderingContext': The cross-origin image at http://localhost:8832/2015_03_19_Try6a3D_dae/2015_03_19_Try6a3D/scrn_ground.png may not be loaded.
I tried to post the javascript that loads the dae, here, but could not get it to format correctly.
3) There is a brief flash of something before the texture loading errors happen. This dae has been loaded in Sketchup where all the textures appear. Of course, I am confused because cross-origin loading had to be working to load 2015_03_19_Try6a3D.dae in the first place. I will gladly send anyone collada.html, 2015_03_19_Try6a3D.dae, and all related files for them to play with.
I had the same problem. ColladaLoader.js currently does not handle CORS out of the box. To load your textures, it uses either the Loader class or the ImageLoader class (depending on the situation), and both need their cross-origin setting assigned to either '' or 'anonymous' if you want to avoid cross-origin errors in all cases for Collada references.
Go to this line in ColladaLoader.js:
texture = loader.load( url );
Add this line right above it:
loader.crossOrigin = '';
Then go to this line in the same script:
loader = new THREE.ImageLoader();
And add this line right below it:
loader.setCrossOrigin( '' );
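Taken together, the patched spots end up looking roughly like this (a sketch; the exact surrounding code differs between three.js versions):
// texture loading path inside ColladaLoader.js
loader.crossOrigin = '';        // allow the CORS-enabled image to be used as a WebGL texture
texture = loader.load( url );

// image loading path inside ColladaLoader.js
loader = new THREE.ImageLoader();
loader.setCrossOrigin( '' );    // same fix for the ImageLoader case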
And voila! My cross-origin errors went away after I made this change.
I ran into this JavaScript error: TypeError: 'undefined' is not an object (evaluating '__gChrome.suggestion.hasNextElement')
I have absolutely no idea where it came from or how it is reproduced. All I know is that it came from three different, unrelated (as in, different companies) people, months apart, all from different pages on our product.
__gChrome.suggestion.hasNextElement is not in our code anywhere so I think it's either a Chrome issue or an extension issue.
All the information I have about this (that I can show; the information that is omitted isn't useful anyway) is:
TypeError: 'undefined' is not an object (evaluating '__gChrome.suggestion.hasNextElement')
Function: send()
Rendering Engine: Mozilla
Browser: Netscape
Version: 5.0 (iPad; CPU OS 7_1 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) CriOS/33.0.1750.21 Mobile/11D167 Safari/9537.53 (05200A98-5316-4F45-882D-7E55DB80E9D4)
Cookies: true
Platform: iPad
User Agent Mozilla/5.0 (iPad; CPU OS 7_1 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) CriOS/33.0.1750.21 Mobile/11D167 Safari/9537.53 (05200A98-5316-4F45-882D-7E55DB80E9D4)
I have tried Googling it but can't find anything. Just curious if anyone has seen this or knows anything about it or can point me in the right direction.
A question regarding Jsoup: I am building a tool that fetches prices from a website. However, this website has streaming content: if I browse manually, I see the prices from 20 minutes ago and have to wait about 3 seconds to get the current price. Is there any way I can introduce some kind of delay in Jsoup so that I can obtain the prices from the streaming section? I am using this code:
conn = Jsoup.connect(link).userAgent("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.72 Safari/537.36");
conn.timeout(5000);
doc = conn.get();
As mentioned in the comments, the site is most likely using some type of scripting that just won't work with Jsoup, since Jsoup only fetches the initial HTML response and does not execute any JavaScript.
I wanted to give you some more guidance, though, on where to go from here. The best bet in these cases is to move to another platform for these types of sites. You can migrate to HTMLUnit, which is a headless browser, or Selenium, which can drive HTMLUnit or a real browser like Firefox or Chrome. I would recommend Selenium if you think you will ever need to move past HTMLUnit, since HTMLUnit can be less stable than the consumer browsers Selenium supports. You can start with Selenium and the HTMLUnit driver, which gives you the option to move to another browser seamlessly later.
You can use a JavaFX WebView with JavaScript enabled. After waiting out the delay, you can extract the contents and pass them to Jsoup.
(After loading your url into your WebView using the example above)
// executeScript returns an Object, so cast the page HTML to a String before handing it to Jsoup
String html = (String) view.getEngine().executeScript("document.documentElement.outerHTML");
Document doc = Jsoup.parse(html);