I am trying to make an application that gets streaming video data using JavaCV and sends it to a WebSocket server. The WebSocket server then distributes the video data to the connected client(s).
1. The application gets live streaming data (MP4) from my PC's webcam using JavaCV.
2. The application keeps sending it to the WebSocket server.
3. The server receives it as binary and sends it to the connected clients.
4. A web browser connects to the server, and JavaScript running in the browser shows the live video as it arrives from the server.
I am new to JavaCV and OpenCV. The snippet below displays the video using com.googlecode.javacv.CanvasFrame with no problem. However, I am not sure how to grab the MP4 data as a live stream.
try {
    FrameGrabber grabber = FrameGrabber.createDefault(0);
    grabber.setFormat("mp4");
    grabber.setFrameRate(30);
    grabber.setImageWidth(640);
    grabber.setImageHeight(480);
    grabber.start();
    double frameRate = grabber.getFrameRate();
    long wait = (long) (1000 / (frameRate == 0 ? 10 : frameRate));
    ByteBuffer buf = null;
    while (true) {
        Thread.sleep(wait);
        IplImage image = grabber.grab();
        if (image != null) {
            buf = image.getByteBuffer();
            send(buf); // Send video data using web socket.
        }
    }
} catch (FrameGrabber.Exception | InterruptedException ex) {
    Logger.getLogger(WSockClient.class.getName()).log(Level.SEVERE, null, ex);
}
Below is the code for the WebSocket server and the JavaScript/HTML5 client.
Both work fine if the application reads an MP4 file from local disk and sends it to the server, but the data produced by the JavaCV code above seems to be invalid for display in the browser.
The server just receives the data and distributes it to the clients; I believe there is no problem there. Here is the code.
@OnMessage
public void binaryMessage(ByteBuffer buf, Session client) throws IOException, EncodeException {
    for (Session otherSession : peers) {
        if (!otherSession.equals(client)) {
            otherSession.getAsyncRemote().sendBinary(buf, new StreamHandler());
        }
    }
}
Here is the JavaScript/HTML5.
<body>
<video controls width="640" height="480" autoplay></video><br>
<canvas id="canvas1" width="640" height="480"></canvas><br>
</body>
<script>
var ws;
var protocol = 'ws';
var host = "localhost:8080";
var url = protocol + "://" + host + "/live/stream";
ws = new WebSocket(url);
ws.binaryType = 'arraybuffer';
ws.addEventListener("open",onOpenWebSocket,false);
ws.addEventListener("close",onCloseWebSocket,false);
ws.addEventListener("message",onMessageWebSocket,false);
window.addEventListener("unload",onUnload,false);
...
function onMessageWebSocket(event) {
    var blob = new Blob([event.data], {type: 'video/mp4'});
    var rb = blob.slice(0, blob.size, 'video/mp4');
    video.src = window.URL.createObjectURL(rb);
}
...
</script>
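From what I have read so far, I suspect the browser side also needs Media Source Extensions (MSE) rather than creating a new blob URL per message. Here is a minimal sketch of what I think that would look like, assuming the server delivers fragmented MP4 and that the codec string matches the actual encoding (both assumptions on my part):

var video = document.querySelector("video");
var mediaSource = new MediaSource();
var sourceBuffer = null;
var queue = [];

video.src = window.URL.createObjectURL(mediaSource);
mediaSource.addEventListener("sourceopen", function () {
    // The codec string is a guess; it must match the actual stream.
    sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
    sourceBuffer.addEventListener("updateend", pump);
});

function pump() {
    if (sourceBuffer && !sourceBuffer.updating && queue.length > 0) {
        sourceBuffer.appendBuffer(queue.shift());
    }
}

function onMessageWebSocket(event) {
    // Append each binary chunk instead of replacing video.src every time.
    queue.push(new Uint8Array(event.data));
    pump();
}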
On the capture side, I think I still need to obtain the MP4 data as a stream using JavaCV, but I don't know how to do that. Please help.
Any comments or suggestions would be appreciated. Thank you.
Related
I want to play a stream from gstreamer in a web browser.
I played around with RTP, WebRTC, and SDP files, but while VLC was able to connect to the stream via a simple SDP file, browsers were not. I later understood that WebRTC requires a secure connection, which only complicates things and is not needed for my purposes. I stumbled upon the HTML5 Media Source Extensions (MSE), which seem like they could help, but I'm not able to find a comprehensive tutorial or appropriate specs on how to get gstreamer to stream the correct data, and on how to play it using MSE. I'm also not sure about the latency of using MSE.
So is there a way to play a stream from gstreamer in a browser?
Thanks.
Using the node-webrtc project, I was able to combine output from gstreamer with a WebRTC call. For gstreamer, there is a project which enables its use with Node: gstreamer-superficial. So basically, you need to run the gstreamer process from the Node process, which can then control the gstreamer output. For every gstreamer frame a callback is invoked, which takes the frame and can send it to the WebRTC calls.
Then a WebRTC call needs to be implemented. Some signaling protocol is required to set up the call. One side of the call will be the server and the other will be the client's browser, instead of two browsers. Then a video track is created, and frames from gstreamer-superficial are pushed into it.
const { RTCVideoSource } = require("wrtc").nonstandard;
const gstreamer = require("gstreamer-superficial");

const source = new RTCVideoSource();
// This is the WebRTC video track which should be used with addTransceiver, see below
const track = source.createTrack();
const frame = {
    width: 1920,
    height: 1080,
    data: null
};

const pipeline = new gstreamer.Pipeline("v4l2src ! videorate ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=25/1 ! videoconvert ! video/x-raw,format=I420 ! appsink name=sink");
const appsink = pipeline.findChild("sink");
const pull = function() {
    appsink.pull(function(buf, caps) {
        if (buf) {
            frame.data = new Uint8Array(buf);
            try {
                source.onFrame(frame);
            } catch (e) {}
            pull();
        } else if (!caps) {
            console.log("PULL DROPPED");
            setTimeout(pull, 500);
        }
    });
};
pipeline.play();
pull();

// Example:
const useTrack = SomeRTCPeerConnection => SomeRTCPeerConnection.addTransceiver(track, { direction: "sendonly" });
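To make this more concrete, here is a rough sketch of the server side of the call using wrtc's standard RTCPeerConnection. The signaling transport is left abstract: sendToBrowser is a placeholder for whatever channel you use (e.g. a WebSocket), and the browser's answer and ICE candidates must be fed back via setRemoteDescription/addIceCandidate.

const { RTCPeerConnection } = require("wrtc");

async function startCall(sendToBrowser) {
    const pc = new RTCPeerConnection();
    // Attach the gstreamer-fed track from above, send-only.
    pc.addTransceiver(track, { direction: "sendonly" });
    pc.onicecandidate = e => {
        if (e.candidate) sendToBrowser({ candidate: e.candidate });
    };
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendToBrowser({ sdp: pc.localDescription });
    return pc;
}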
I am currently looking for the best way to store incoming WebRTC video streams. I am joining the video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant to the browser.
The solutions I am researching are:
Intercepting network packets coming to the browser, e.g. using Wireshark, and then decoding them, following this article: https://webrtchacks.com/video_replay/
Modifying a browser to store the recording as a file, e.g. by modifying Chromium itself
Screen recorders, or solutions like xvfb & ffmpeg, are not options due to resource constraints. Is there any other way that could let me capture packets or encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's PeerConnection. Here is an example:
In an extension manifest, add the following content script:
"content_scripts": [
    {
        "matches": ["http://*/*", "https://*/*"],
        "js": ["payload/inject.js"],
        "all_frames": true,
        "match_about_blank": true,
        "run_at": "document_start"
    }
]
inject.js
var inject = '('+function() {
    //override the browser's default RTCPeerConnection.
    var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
    //make sure it is supported
    if (origPeerConnection) {
        //our own RTCPeerConnection
        var newPeerConnection = function(config, constraints) {
            console.log('PeerConnection created with config', config);
            //proxy the original peer connection
            var pc = new origPeerConnection(config, constraints);
            //store the old addStream
            var oldAddStream = pc.addStream;
            //addStream is called when a local stream is added.
            //arguments[0] is a local media stream
            pc.addStream = function() {
                console.log("our addStream called!")
                //our mediaStream object
                console.dir(arguments[0])
                return oldAddStream.apply(this, arguments);
            }
            //ontrack is called when a remote track is added.
            //the media stream(s) are located in event.streams
            pc.ontrack = function(event) {
                console.log("ontrack got a track")
                console.dir(event);
            }
            window.ourPC = pc;
            return pc;
        };
        ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
            // Override objects if they exist in the window object
            if (window.hasOwnProperty(obj)) {
                window[obj] = newPeerConnection;
                // Copy the static methods
                Object.keys(origPeerConnection).forEach(function(x) {
                    window[obj][x] = origPeerConnection[x];
                })
                window[obj].prototype = origPeerConnection.prototype;
            }
        });
    }
}+')();';
var script = document.createElement('script');
script.textContent = inject;
(document.head||document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two mediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream appears to carry the local media streams, and the event object in ontrack is an RTCTrackEvent, which has a streams object; I assume those are what you are looking for.
To access these streams from your extension's content script, you will need to create audio elements and set the srcObject property to the media stream, e.g.:
pc.ontrack = function(event) {
    //check if our element exists
    var elm = document.getElementById("remoteStream");
    if (elm == null) {
        //create an audio element
        elm = document.createElement("audio");
        elm.id = "remoteStream";
    }
    //set the srcObject to our stream. not sure if you need to clone it
    elm.srcObject = event.streams[0].clone();
    //write the element to the body
    document.body.appendChild(elm);
    //fire a custom event so our content script knows the stream is available.
    //you could pass the id in the "detail" object, for example:
    //new CustomEvent("remoteStreamAdded", {"detail": {"id": "audio_element_id"}})
    //then access it via e.detail.id in your event listener.
    var e = new CustomEvent("remoteStreamAdded");
    window.dispatchEvent(e);
}
Then in your content script you can listen for that event/access the mediastream like so:
window.addEventListener("remoteStreamAdded", function(e) {
    var elm = document.getElementById("remoteStream");
    var stream = elm.captureStream();
})
With the captured stream available to your content script you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s), or you could use something like peer.js or maybe binary.js to stream it to another destination.
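For example, a minimal MediaRecorder sketch (the mimeType here is an assumption; pick one the browser actually supports):

var chunks = [];
var recorder = new MediaRecorder(stream, { mimeType: "audio/webm" });
recorder.ondataavailable = function (e) {
    if (e.data.size > 0) chunks.push(e.data);
};
recorder.onstop = function () {
    // combine the chunks into a single blob you can upload or save
    var blob = new Blob(chunks, { type: "audio/webm" });
    console.log("recorded", blob.size, "bytes");
};
recorder.start(1000); // emit a chunk every second
// later: recorder.stop();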
I haven't tested this, but it should also be possible to override the local streams. For example, in inject.js you could establish some blank mediastream, override navigator.mediaDevices.getUserMedia, and instead of returning the local mediastream return your own mediastream.
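An untested sketch of that idea (the canvas stream is just a stand-in for whatever stream you want to substitute):

var origGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
navigator.mediaDevices.getUserMedia = function (constraints) {
    // hand back a blank canvas stream instead of the real camera
    var canvas = document.createElement("canvas");
    canvas.width = 640;
    canvas.height = 480;
    return Promise.resolve(canvas.captureStream(25));
    // origGetUserMedia(constraints) is still available if you need the real devices
};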
This method should work in Firefox and maybe other browsers as well, assuming you use an extension/app to load the inject.js script at the start of the document. It being loaded before any of the target's libraries is key to making this work.
Capturing packets will only give you the network packets, which you would then need to turn into frames and put into a container. A server such as Janus can record videos.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but it is much heavier on resources.
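A rough sketch of that route using Puppeteer (the call URL is a placeholder, and the flag auto-accepts the camera/microphone prompt):

const puppeteer = require("puppeteer");

(async () => {
    const browser = await puppeteer.launch({
        args: ["--use-fake-ui-for-media-stream"] // auto-accept getUserMedia prompts
    });
    const page = await browser.newPage();
    await page.goto("https://example.com/call"); // placeholder call URL
    // From here, inject a script that grabs the remote streams and
    // records them with MediaRecorder as sketched above.
})();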
I'm trying to stream an audio file to an Angular application, where an HTML5 audio element has its src set to my API endpoint (e.g. /audio/234). My backend is implemented in .NET Core 2.0. I have already implemented this kind of streaming: .NET Core| MVC pass audio file to html5 player. Enable seeking
Seeking works unless I seek towards the end of the file immediately after the audio starts playing. I use the audio element's autoplay attribute, so playback starts as soon as the element has enough data. In that situation the element does not yet have all the data when I seek, so it makes a new GET request to my API, and my backend log shows this exception:
fail: Microsoft.AspNetCore.Server.Kestrel[13]
[1] Connection id "0HL9V370HAF39", Request id "0HL9V370HAF39:00000001": An unhandled exception was thrown by the application.
[1] System.InvalidOperationException: Response Content-Length mismatch: too few bytes written (0 of 6126919).
Here is my audio controller GET method.
byte[] audioArray = new byte[0];
//Here I load audio file from cloud
long fSize = audioArray.Length;
long startbyte = 0;
long endbyte = fSize - 1;
int statusCode = 200;
var rangeRequest = Request.Headers["Range"].ToString();
_logger.LogWarning(rangeRequest);
if (rangeRequest != "")
{
    string[] range = Request.Headers["Range"].ToString().Split(new char[] { '=', '-' });
    startbyte = Convert.ToInt64(range[1]);
    if (range.Length > 2 && range[2] != "") endbyte = Convert.ToInt64(range[2]);
    if (startbyte != 0 || endbyte != fSize - 1 || (range.Length > 2 && range[2] == ""))
    {
        statusCode = 206;
    }
}
_logger.LogWarning(startbyte.ToString());
long desSize = endbyte - startbyte + 1;
_logger.LogWarning(desSize.ToString());
_logger.LogWarning(fSize.ToString());
Response.StatusCode = statusCode;
Response.ContentType = "audio/mp3";
Response.Headers.Add("Content-Accept", Response.ContentType);
Response.Headers.Add("Content-Length", desSize.ToString());
Response.Headers.Add("Content-Range", string.Format("bytes {0}-{1}/{2}", startbyte, endbyte, fSize));
Response.Headers.Add("Accept-Ranges", "bytes");
Response.Headers.Remove("Cache-Control");
var stream = new MemoryStream(audioArray, (int)startbyte, (int)desSize);
return new FileStreamResult(stream, Response.ContentType)
{
    FileDownloadName = track.Name
};
Am I missing some header, or what?
I didn't get this exception with .NET Core 1.1, but I'm not sure whether that is just coincidence and/or bad testing. If anybody knows whether something changed in .NET Core related to streaming, I would appreciate that info.
Researching further, I found this: https://learn.microsoft.com/en-us/aspnet/core/aspnetcore-2.0 (see the "Enhanced HTTP header support" heading). It says:
If an application visitor requests content with a Range Request header, ASP.NET will recognize that and handle that header. If the requested content can be partially delivered, ASP.NET will appropriately skip and return just the requested set of bytes. You do not need to write any special handlers into your methods to adapt or handle this feature; it is automatically handled for you.
So all I needed was some cleanup when moving from .NET Core 1.1 to 2.0, because there is already a handler for those headers.
byte[] audioArray = new byte[0];
//Here I get my MP3 file from cloud
var stream = new MemoryStream(audioArray);
return new FileStreamResult(stream, "audio/mp3")
{
    FileDownloadName = track.Name
};
The problem was in the headers. I don't know exactly whether a header was incorrect or my stream initialization was incorrect, but it's working now. I used this: https://stackoverflow.com/a/35920244/8081009 . The only change I made was renaming it to AudioStreamResult. Then I used it like this:
Response.ContentType = "audio/mp3";
Response.Headers.Add("Content-Accept", Response.ContentType);
Response.Headers.Remove("Cache-Control");
var stream = new MemoryStream(audioArray);
return new AudioStreamResult(stream, Response.ContentType)
{
    FileDownloadName = track.Name
};
Notice that I pass the full stream to AudioStreamResult.
var stream = new MemoryStream(audioArray);
I've learnt a lot in the last 48 hours about cross-domain policies, but apparently not enough.
Following on from this question: my HTML5 game supports Facebook login, and I'm trying to download the profile pictures of people's friends. In the HTML5 version of my game I get the following error in Chrome.
detailMessage: "com.google.gwt.core.client.JavaScriptException:
(SecurityError) ↵ stack: Error: Failed to execute 'texImage2D' on
'WebGLRenderingContext': Tainted canvases may not be loaded.
As I understand it, this error occurs because I'm trying to load an image from a different domain, but this can be worked around with an Access-Control-Allow-Origin header, as detailed in this question.
The URL I'm trying to download from is
https://graph.facebook.com/1387819034852828/picture?width=150&height=150
Looking at the Network tab in Chrome, I can see this has the required access-control-allow-origin header and responds with a 302 redirect to a new URL. That URL varies, I guess depending on load balancing, but here's an example URL.
https://fbcdn-profile-a.akamaihd.net/hprofile-ak-xap1/v/t1.0-1/c0.0.160.160/p160x160/11046398_1413754142259317_606640341449680402_n.jpg?oh=6738b578bc134ff207679c832ecd5fe5&oe=562F72A4&gda=1445979187_2b0bf0ad3272047d64c7bfc2dbc09a29
This URL also has the access-control-allow-origin header. So I don't understand why this is failing.
Given that this is Facebook, and that thousands of apps, games, and websites display users' profile pictures, I'm assuming this is possible. I'm aware that I can bounce through my own server, but I'm not sure why I should have to.
Answer
I eventually got cross-domain image loading working in libGDX with the following code (which is pretty hacky and I'm sure can be improved). I've not managed to get it working with the AssetDownloader yet; I'll hopefully work that out eventually.
public void downloadPixmap(final String url, final DownloadPixmapResponse response) {
    final RootPanel root = RootPanel.get("embed-html");
    final Image img = new Image(url);
    img.getElement().setAttribute("crossOrigin", "anonymous");
    img.addLoadHandler(new LoadHandler() {
        @Override
        public void onLoad(LoadEvent event) {
            HtmlLauncher.application.getPreloader().images.put(url, ImageElement.as(img.getElement()));
            response.downloadComplete(new Pixmap(Gdx.files.internal(url)));
            root.remove(img);
        }
    });
    root.add(img);
}

interface DownloadPixmapResponse {
    void downloadComplete(Pixmap pixmap);
    void downloadFailed(Throwable e);
}
Are you setting the crossOrigin attribute on your img before requesting it?
var img = new Image();
img.crossOrigin = "anonymous";
img.src = "https://graph.facebook.com/1387819034852828/picture?width=150&height=150";
It was working for me when this question was asked. Unfortunately the URL above no longer points to anything, so I've changed it in the example below.
var img = new Image();
img.crossOrigin = "anonymous"; // COMMENT OUT TO SEE IT FAIL
img.onload = uploadTex;
img.src = "https://i.imgur.com/ZKMnXce.png";

function uploadTex() {
    var gl = document.createElement("canvas").getContext("webgl");
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    try {
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
        log("DONE: ", gl.getError());
    } catch (e) {
        log("FAILED to use image because of security:", e);
    }
}

function log() {
    var div = document.createElement("div");
    div.innerHTML = Array.prototype.join.call(arguments, " ");
    document.body.appendChild(div);
}
<body></body>
How to check you're receiving the headers
Open your devtools, pick the Network tab, reload the page, select the image in question, and look at both the request headers and the response headers.
The request should show that your browser sent an Origin: header.
The response should show you received
Access-Control-Allow-Methods: GET, OPTIONS, ...
Access-Control-Allow-Origin: *
Note, both the response AND THE REQUEST must show the entries above. If the request is missing Origin:, then you didn't set img.crossOrigin and the browser will not let you use the image, even if the response said it was ok.
If your request has the Origin: header and the response does not have the other headers, then the server did not give permission to use the image. In other words, it will work in an image tag and you can draw it to a canvas, but you can't use it in WebGL, and any 2D canvas you draw it into will become tainted, so toDataURL and getImageData will stop working.
This is a classic cross-domain issue that happens when you're developing locally.
I use Python's simple HTTP server as a quick fix for this.
Navigate to your directory in the terminal, then type:
$ python -m SimpleHTTPServer
and you'll get
Serving HTTP on 0.0.0.0 port 8000 ...
so go to 0.0.0.0:8000/ and you should see the problem resolved. (On Python 3 the equivalent is python -m http.server.)
You can base64 encode your texture.
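For example, a minimal sketch: the texture below is generated locally and inlined as a base64 data: URL, which is same-origin and therefore doesn't taint anything (with a real asset you would paste its base64 encoding instead):

// draw a stand-in texture on a local canvas
var c = document.createElement("canvas");
c.width = c.height = 64;
var ctx = c.getContext("2d");
ctx.fillStyle = "#0f0";
ctx.fillRect(0, 0, 64, 64);

var img = new Image();
img.onload = function () {
    // safe to upload with gl.texImage2D, no CORS involved
};
img.src = c.toDataURL("image/png"); // a base64 data: URL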
I want to record my webcam and broadcast the stream live to other clients.
I can easily show my webcam in a video tag on the same page with something like this:
function initialize() {
    video = $("#v")[0];
    width = video.width;
    height = video.height;
    var canvas = $("#c")[0];
    context = canvas.getContext("2d");
    navigator.getUserMedia({video: true}, startStream, function () {});
}

function startStream(stream) {
    video.src = URL.createObjectURL(stream);
    video.play();
    requestAnimationFrame(draw);
}

function draw() {
    var frame = readFrame();
    if (frame) {
        replaceGreen(frame.data);
        context.putImageData(frame, 0, 0);
    }
    requestAnimationFrame(draw);
}

function readFrame() {
    try {
        context.drawImage(video, 0, 0, width, height);
    } catch (e) {
        return null;
    }
    return context.getImageData(0, 0, width, height);
}
But how do I send that stream to a server, do some image processing, and then broadcast it to other clients?
Is Node.js the best way? Do you have any readings/libs to recommend?
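For reference, the naive approach I can think of is to re-encode each canvas frame as a JPEG and push it to the server over a WebSocket (a sketch only; the endpoint URL is made up, and whether this scales is part of my question):

var ws = new WebSocket("ws://localhost:8080/ingest"); // made-up endpoint

function sendFrame(canvas) {
    // re-encode the current canvas contents as JPEG and ship them
    canvas.toBlob(function (blob) {
        if (ws.readyState === WebSocket.OPEN) ws.send(blob);
    }, "image/jpeg", 0.7);
}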
There are currently several working drafts on this, but I think the best way right now to broadcast live video in a browser is to use Flash ActionScript. Of course, the clients will still need to install a plugin to run the app in the browser. Note that you can't use the Apache web server to broadcast live video in that case; you will need a media server such as Red5 (free), Wowza, or Adobe Media Server.
Also, see if you can find some help here:
'WebRTC (where RTC stands for Real-Time Communications) is a technology that enables audio/video streaming and data sharing between browser clients (peers). As a set of standards, WebRTC provides any browser with the ability to share application data and perform teleconferencing peer to peer, without the need to install plug-ins or third-party software. '
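To give a flavor of the WebRTC route, here is a minimal sketch of the capture/offer side (signaling is omitted; you still need some channel to exchange the SDP and ICE candidates):

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(function (stream) {
        var pc = new RTCPeerConnection();
        stream.getTracks().forEach(function (t) { pc.addTrack(t, stream); });
        return pc.createOffer()
            .then(function (offer) { return pc.setLocalDescription(offer); })
            .then(function () {
                // send pc.localDescription to the remote peer via your signaling channel
            });
    });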