This is regarding the functionality of Google Chromecast. Consider a scenario where I'm playing a YouTube video on my Android phone. I have an old TV to which I've attached the Chromecast dongle. The TV and my phone are on the same Wi-Fi network.
As I understand it, when I tap the "Cast" icon in the YouTube app on my phone, it sends a request to the YouTube server to stream media to my Chromecast dongle (which is already registered). So it is not streaming directly from my phone; from my phone I'm instructing YouTube to send media to one of my other registered devices. This explains why the video continues to play even if I send the YouTube app to the background.
My question is: if that is the case, then why is it necessary for the Chromecast and my phone to be on the same Wi-Fi network? I should be able to send a "Cast" request from my office network and have YouTube start streaming videos to the TV connected to my home network. What am I missing here? Thanks for your help!
You are sending some metadata about the video you are watching to the Chromecast, and the Chromecast itself fetches and plays the video from YouTube. In most home setups (behind NAT, with no ports exposed), YouTube would have no way to connect to your Chromecast and push a file to it.
So, for your devices to exchange this metadata, they need to be on the same Wi-Fi network.
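To illustrate, here's roughly what that exchange looks like from the sender side using the Cast Web Sender SDK (a minimal sketch; it assumes the sender library has been loaded from gstatic with loadCastFramework=1, and the media URL is a placeholder):

```js
// Minimal Web Sender sketch: only a small "what to play" description crosses
// the local network; the Chromecast then fetches the video from the URL itself.
const context = cast.framework.CastContext.getInstance();
context.setOptions({
  receiverApplicationId: chrome.cast.media.DEFAULT_MEDIA_RECEIVER_APP_ID,
});

function castVideo(url) {
  // getCurrentSession() is non-null once the user has picked a cast device
  const session = context.getCurrentSession();
  const mediaInfo = new chrome.cast.media.MediaInfo(url, 'video/mp4');
  const request = new chrome.cast.media.LoadRequest(mediaInfo);
  return session.loadMedia(request); // the dongle streams the URL on its own
}
```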
First, there is the discovery task which, based on the methods used, requires being on the same network. More importantly, the model is to cast content to a TV, and doing that when you are not in front of a TV doesn't make much sense in the large majority of cases (there is no point in casting something to my home TV while I am at work). As was shown at Google I/O this year, we will support a case where a "guest" can participate in casting without being on the local network, but that also requires proximity to the cast device.
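To give an idea of that discovery step: cast devices announce themselves over mDNS, which is inherently local to the network segment. A rough sketch of watching for them from Node.js, assuming the third-party bonjour npm package:

```js
// Rough illustration of discovery (not the actual sender code): Chromecasts
// advertise the _googlecast._tcp service over mDNS on the local network.
const bonjour = require('bonjour')();

bonjour.find({ type: 'googlecast' }, (service) => {
  console.log('Found cast device:', service.name, service.addresses);
});
```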
I am new to WebXR. I was trying to use WebRTC with WebXR. The user first enters an AR session and then creates a WebRTC peer connection, but ICE candidates are not generated in Chrome for Android while the user is in the AR session. As soon as the user exits the AR session, the ICE candidates are transferred. Is this a bug in Chrome?
The problem is hardware related. Some devices allow the use of both the front and back cameras simultaneously; on such devices, the code worked properly. On other devices, the front and back cameras cannot be accessed simultaneously, so the code does not work there. Also, the WebXR Device API does not allow access to the camera feed at the moment; it is, however, a proposed feature.
I haven't tried it myself, but in theory you can use the canvas captureStream API to stream the WebXR canvas.
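An untested sketch of that idea (assuming the XR session renders into a visible, XR-compatible canvas):

```js
// Untested sketch: grab the canvas WebXR is (supposedly) rendering into
// and feed it to the peer connection as a video track.
const canvas = document.querySelector('canvas'); // the XR-compatible canvas
const stream = canvas.captureStream(30);         // request ~30 fps
const pc = new RTCPeerConnection();
stream.getVideoTracks().forEach((track) => pc.addTrack(track, stream));
// ...then proceed with the usual offer/answer and ICE candidate exchange.
```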
Can you post your code here? You might want to tweak how you pass the stream to the WebRTC connection.
As far as I know, it is not possible to use canvas.captureStream() because WebXR doesn't render to the canvas directly.
I am also looking for a way to stream a WebXR session via WebRTC, so I would be highly interested in your solution, shivamag00!
Hope to hear from you!
I use Google Chrome as my browser, and when I'm outside of my home, I often tether my laptop to my phone for internet.
I sometimes listen to music or watch videos on YouTube in the aforementioned circumstance, often repeating the same videos.
Is it possible to configure the browser to store the data from the videos /forever/ (or at least longer)? I've even left my computer on for a couple of days, and as soon as I go offline the video will still break at some point.
It seems senseless to be continually reloading streamed videos and, in doing so, eating up all of my limited data.
If the browser can temporarily store the video (which it must do), is there some way to extend the lifetime of that data?
My WP8 app has some audio and video files that I'd like to share with devices on other platforms, such as iPhone or Android devices. The first thing that comes to mind is Bluetooth. Can I implement this feature in my app, and if so, how? Thank you!
Sure, you can do that; it's the same thing all file-sharing apps do (WhatsApp, for example), except you are doing it over a Bluetooth connection. You just need to establish a connection and transfer the file to the target device, which should be able to open the file depending on its format and the apps installed on the device.
I've worked with iOS Bluetooth extensively, and the different ways you can share data over Bluetooth are:
Using one of the Bluetooth profiles already supported by iOS: http://support.apple.com/kb/ht3647
Bluetooth LE (Core Bluetooth). I haven't used this, but the bandwidth and the structure of the data being transmitted are limited, so it may not suit your purpose of sending audio and video.
Game Kit. This is for iPhone-to-iPhone data transmission though.
External Accessory Framework. This framework allows you to transmit raw data, but it is only available to BT-enabled devices with special Apple authentication hardware (you have to join the MFi program and jump through all these hoops to get your device qualified). This doesn't work for you either, since you'd want to send data from WP8 or Android, which are definitely not MFi qualified.
So, bottom line, you CANNOT send raw audio or video data from WP8 and Android to your iPhone unless you jailbreak the iPhone and install a new stack. iOS's BT stack is really limiting in that way, as I've learned the hard way.
On the bright side, you can definitely send raw data between Android and WP8 over Bluetooth. You have to create an RFCOMM socket on both ends (one sending; one listening). The Bluetooth profile used for this sort of data transfer is called the Serial Port Profile.
I am trying to develop a test-taking website for students. On this website, students should be able to answer the questions (displayed in text format) using a webcam, in one go. Currently I have implemented this feature using Flash: it captures the frames and simultaneously sends them to the server. The problem with this technique is that the quality (FPS) of my video is restricted and depends on the bandwidth of the internet connection. I am also not in favor of using Flash.
I want that, as soon as a student clicks on the start button, a timer starts and the video begins recording. The video should be saved on the client's machine (without asking the client to specify a path); on completion it should automatically be uploaded to the server, and when the upload completes, the video should be automatically deleted from the client's machine.
In short, can anyone give me a starting point so that I can proceed with the work? Any help will be highly appreciated. Thanks!
Here is a good example of how to get the webcam working with HTML5:
http://blog.teamtreehouse.com/accessing-the-device-camera-with-getusermedia
It doesn't cover how to upload the video to the server, though.
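The preview part of that article boils down to roughly this (a minimal sketch using the modern mediaDevices API; the older article uses the prefixed navigator.getUserMedia):

```js
// Minimal webcam preview with getUserMedia; assumes a <video> element
// exists on the page. Error handling is reduced to a console message.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then((stream) => {
    const video = document.querySelector('video');
    video.srcObject = stream;
    return video.play();
  })
  .catch((err) => console.error('Camera unavailable or permission denied:', err));
```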
Currently I have implemented this feature using Flash: it captures the frames and simultaneously sends them to the server. The problem with this technique is that the quality (FPS) of my video is restricted and depends on the bandwidth of the internet connection.
That is actually incorrect.
The fps you're getting depends 100% on:
the webcam quality
the light available in the room (the more light, the better)
the resolution you're recording at (lower resolution results in higher fps, even with low-quality webcams in low light)
The video should be saved on the client's machine (without asking the client to specify a path); on completion it should automatically be uploaded to the server, and when the upload completes, the video should be automatically deleted from the client's machine.
Flash records by streaming the audio/video data (over RTMP) to a media server (Red5, AMS, Wowza). After the recording is stopped, you could move the file to a web server and trigger an HTTP download.
As far as HTML goes, the Media Recording API has been implemented by Firefox and Chrome 49, and it allows you to record to local RAM and download the file as .webm (the audio/video codecs might differ between browsers).
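A rough sketch of that flow, with a hypothetical /upload endpoint standing in for a real server:

```js
// Record camera + mic into in-memory chunks with MediaRecorder, then POST the
// resulting blob. '/upload' is a hypothetical endpoint; production code needs
// more error handling and mimeType feature checks.
async function recordAndUpload(durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: 'video/webm' });
    fetch('/upload', { method: 'POST', body: blob }); // hypothetical endpoint
    stream.getTracks().forEach((t) => t.stop());      // release the camera
  };
  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}
```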
Disclaimer: I work at Pipe which handles video recording.
I'm looking for a way to securely deliver video to mobile devices. There are two options:
HLS in a <video> tag. This works very nicely on iOS and supports adaptive bitrate, which is perfect for mobile. However, it seems to only work well on iOS; support for it on Android appears to be fragmented. I've read that Android has officially supported it since 3.0, but on all the Android devices I've tested (>3.0), HLS hasn't played back in the browser (see the feature check after these two options).
Progressive download in a <video> tag. This will work fine on iOS and Android devices, but the concern is that, since it's just a progressive download of the video, the user may find a way to grab the video once the browser has finished downloading it. This may be more difficult on iOS, but I'm sure it's not that hard to figure out where the browser stored the downloaded video in a tmp folder somewhere.
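For reference, this is the kind of feature check I'd use to pick between the two options (URLs are placeholders):

```js
// Probe for native HLS support before choosing a source.
const video = document.createElement('video');
if (video.canPlayType('application/vnd.apple.mpegurl')) {
  video.src = 'https://example.com/video/master.m3u8';      // option 1: HLS
} else {
  video.src = 'https://example.com/video/progressive.mp4';  // option 2: progressive
}
```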
Either method can be protected from deep linking by using an expiring-token approach, where the token is generated server-side with a secret key that only the content server knows about. The video request would only be valid for 5 or 10 minutes, which would kill off deep linking.
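Sketched out, the token generation I have in mind would be something like this server-side (Node.js; the names and secret handling are just illustrative):

```js
// Hypothetical expiring-token generator: sign the video path plus an expiry
// timestamp with a secret shared only with the content server.
const crypto = require('crypto');
const SECRET = process.env.TOKEN_SECRET; // illustrative; never hard-code this

function signedVideoUrl(path, ttlSeconds = 600) {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const sig = crypto.createHmac('sha256', SECRET)
                    .update(`${path}:${expires}`)
                    .digest('hex');
  // The content server recomputes the HMAC and rejects expired or forged links.
  return `${path}?expires=${expires}&sig=${sig}`;
}
```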
Is anyone aware of any way around these issues? Even if I was able to prevent deep linking, the user could still get the video itself and redistribute it. Perhaps it's just not possible?
Thanks
Rule #1 of the internet:
If you don't want someone stealing it, don't put it online.
Welcome to the circumvention arms race. Brought to you by DownloadHelper.
There's nothing you can do to stop someone who really wants to pirate your video. There are various measures, like those you mention, that make it more difficult, but someone who really wants to copy it could find a way to capture it from memory, or even just point a camera at the screen and record the playback of the video.
It's the same way you protect your car. You install a steering lock, an alarm, and an engine immobiliser, and then someone comes along, pulls the car onto a flat-bed truck, and drives away with it.
Bottom line - you can't stop a determined thief, but you can make theft more difficult so that you're not the most attractive target.
As I was reading the above, I realized I could get past all these techniques pretty quickly.
For a project I can't describe in much detail because of an NDA, we created our own protocol based on a well-known encryption method (which I can't mention either; military grade), encoded packets into the protocol on the server, and decoded them on the device.
Unfortunately, this isn't perfect either, because a lot of mobile apps can be reverse-engineered, and once you get the key it's game over (very easy on Android). Of course, you could periodically recycle the key, in which case even if they decompiled the Android app and got the key, it wouldn't work for very long.
This is a lot of work and can't be implemented with HTML5, HLS, or even RTSP.
It also requires a custom server application that takes the video stream and re-transmits it with the custom protocol.
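Purely as a generic illustration (not the actual protocol, which I can't share), packet wrapping along those lines might look like this in Node.js:

```js
// Generic illustration only: authenticated encryption of media packets with
// AES-256-GCM. The real protocol, key handling, and transport framing differ.
const crypto = require('crypto');

function encryptPacket(key, packet) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const body = Buffer.concat([cipher.update(packet), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), body]); // iv | tag | ciphertext
}

function decryptPacket(key, blob) {
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, blob.subarray(0, 12));
  decipher.setAuthTag(blob.subarray(12, 28));
  return Buffer.concat([decipher.update(blob.subarray(28)), decipher.final()]);
}
```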
On the other hand, the protocol was transport agnostic, which meant we could use a variety of transports: TCP, IAP, and Bluetooth. It would also work on all mobile/desktop platforms.
The other little requirement: it couldn't use a browser; it has to be a custom app.