I am attempting to get the MPEG-2 Decoder (aka DTV-DVD Video Decoder) to give me progressive YV12 or NV12 frames that can be uploaded to OpenGL for rendering. But what I'm seeing rendered looks like some form of uncompressed adaptive motion interlacing or else just B or P frames that don't give the full image. (The code that renders the YV12/NV12 in OpenGL works well with other sources, so that's not the problem.)
One important clue: I see one perfectly rendered frame when the movie starts and whenever it loops back to the beginning. This tells me that's the only time I'm getting a full frame of valid YV12/NV12 data.
Shortest description possible:
1) Created a custom Sample Grabber (based on CTransInPlaceFilter) so that I could get samples that have a VIDEOINFOHEADER2. This works as expected, and the sample sizes match expectations for YV12/NV12 at the resolution I'm playing. (Helpful example of rolling your own Sample Grabber here.)
2) To ensure I only get progressive frames, I changed the CheckInputType() method of my Sample Grabber to return E_FAIL if the dwInterlaceFlags field of the VIDEOINFOHEADER2 has the AMINTERLACE_IsInterlaced flag set.
3) I am setting the eAVDecVideoSoftwareDeinterlaceMode_ProgressiveDeinterlacing flag on the decoder using the ICodecAPI interface with CODECAPI_AVDecVideoSoftwareDeinterlaceMode. (If I don't do this, the decoder won't connect to my Sample Grabber because it doesn't accept interlaced frames.)
4) To debug this, I'm using the IMediaSample2 interface to get the properties of the incoming media samples in the Sample Grabber. The dwTypeSpecificFlags member of the AM_SAMPLE2_PROPERTIES struct tells me that the frames are AM_VIDEO_FLAG_INTERLEAVED_FRAME, which I believe indicates I'm getting a full frame instead of a single field. The AM_VIDEO_FLAG_I_SAMPLE bit is also set, for all frames, indicating that I'm getting full "I" frames and not "B" or "P" frames.
5) Given that all frames are "I" frames, I'd expect to see my image instead of the gobbledygook shown above. As mentioned above, the only time I see a valid image is when the movie loops back around to the first frame.
6) Last thing: I do see that my samples have the AM_VIDEO_FLAG_WEAVE flag set. Is this "weaving" of the image the problem?
Thanks,
Mark
How to extract the image from this https://www.google.com/maps/#45.8118462,15.9725486,3a,75y/data=!3m7!1e2!3m5!1sAF1QipOH6lgU7bug2ndyW-9-Uq0kgKqcKDtnGei2N5Qo!2e10!6shttps:%2F%2Flh5.googleusercontent.com%2Fp%2FAF1QipOH6lgU7bug2ndyW-9-Uq0kgKqcKDtnGei2N5Qo%3Dw150-h150-k-no-p!7i3024!8i4032
(If the link disappears let me describe how to reproduce the question. Find any shop on Google Maps that has the "shop title image" appearing in the shop details on the left side when you click on that shop. Click on that image to expand it across the whole viewport.)
I found the <canvas> element that I guess contains the image. I tried to do .getContext('2d') on that canvas element, but I keep getting null.
If you are getting null when calling getContext("2d"), it's because another type of context was already created on that canvas, in this case a "webgl" one.
To convert that canvas to a new image, you'd normally call canvas.toBlob() (whatever the context type).
And if you need to crop that canvas content, you'd draw it on another canvas.
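A minimal sketch of that generic route, assuming sourceCanvas points at the canvas element and using an illustrative 150×150 crop:

// Draw a cropped region of the source canvas into a new canvas, then export it
const cropCanvas = document.createElement('canvas');
cropCanvas.width = 150;   // illustrative crop size
cropCanvas.height = 150;
const cropCtx = cropCanvas.getContext('2d');
// drawImage accepts another canvas as its image source
cropCtx.drawImage(sourceCanvas, 50, 50, 150, 150, 0, 0, 150, 150);
cropCanvas.toBlob(function (blob) {
  // e.g. turn the blob into an object URL you can display or download
  const url = URL.createObjectURL(blob);
  console.log(url);
});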
But since they did not prevent the WebGL context from throwing away its drawing buffer (by passing preserveDrawingBuffer in the getContext call), you'll only get a transparent image back from it.
Anyway, none of these methods will retrieve the original image; they will create an entirely new image (probably of lower quality, and bigger in size). To retrieve the original image, check the network tab of your dev tools, or, if you need to do it programmatically, inject a script that spoofs all fetch, XHR and HTMLImageElement objects in order to log their resource URLs. But that gets dirty.
I have an app where I play different code-generated sounds. I place these sounds in an AudioBufferSourceNode.
I allow the user to choose which output device to play the sound through, so I use a MediaStreamAudioDestinationNode with its stream used as the source for an Audio element. This way, when the user chooses an audio output to play the sound to, I set the sink ID of the Audio element to the requested audio output.
So I have AudioBufferSourceNode -> some Audio Graph (gain nodes, etc) -> MediaStreamAudioDestinationNode -> Audio element.
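For reference, a rough sketch of that routing; deviceId (e.g. from enumerateDevices()) and someAudioBuffer (one of the code-generated buffers) are placeholders here:

const ctx = new AudioContext();
const streamDestination = ctx.createMediaStreamDestination();

// Feed the graph into an <audio> element instead of ctx.destination
const audioEl = new Audio();
audioEl.srcObject = streamDestination.stream;
audioEl.play();

// When the user picks an output device
audioEl.setSinkId(deviceId).catch(console.error);

// AudioBufferSourceNode -> gain -> MediaStreamAudioDestinationNode
const source = ctx.createBufferSource();
const gain = ctx.createGain();
source.buffer = someAudioBuffer;
source.connect(gain);
gain.connect(streamDestination);
source.start();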
When I play the first sound, it sounds fine. But when I create a new source and connect it to the same MediaStreamAudioDestinationNode, the sound is played at the wrong pitch.
I created a Fiddle that shows the problem.
Is this a bug, or am I doing something wrong?
The problem was identified based on the OP's Chrome ticket.
It seems to come from the lack of sync between AudioElement and its source AudioNode (AudioBufferSourceNode, OscillatorNode, etc.) when you pause the source and play it back again.
The solution is to always call AudioElement.pause() and AudioElement.play() alongside your source's stop() and start() calls.
https://jsfiddle.net/k1r7o0xj/3/
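In other words, the pattern looks roughly like this (node and element names are illustrative, matching the setup in the question):

// Stopping: stop the source node and pause the element together
bufferSource.stop();
audioEl.pause();

// Restarting: buffer sources are one-shot, so create a fresh one, then resume both
const newSource = ctx.createBufferSource();
newSource.buffer = someAudioBuffer;
newSource.connect(streamDestination);
newSource.start();
audioEl.play();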
It's possible to dynamically change your graph layout by using .connect() and .disconnect(), even when audio is playing or sent through a stream (which could even be streamed over WebRTC).
I couldn't find a reference in the spec, so I'm pretty sure this is taken for granted.
For example, if you have two AudioBufferSourceNodes bufferSource1 and bufferSource2, and a MediaStreamAudioDestinationNode streamDestination:
bufferSource1.connect(streamDestination);
//do some other things here, and after some time, switch to bufferSource2:
//(streamDestination doesn't need to be explicitly specified here)
bufferSource1.disconnect(streamDestination);
bufferSource2.connect(streamDestination);
Example in action.
Edit 1:
Proper implementation:
According to the Editor's Draft of the Audio Output API, it is planned that it will also be possible to choose a custom audio output device for the AudioContext (by means of new AudioContext({ sinkId: requestedSinkId });). I couldn't find any info on the progress, and even found a related discussion which the asker apparently read already. According to this and (many) other references, it doesn't seem to be an easy task, but it's planned for Web Audio V1.
Edit:
That section has been removed from the API Draft, but you can still find it in an older version.
Current workaround:
I played around with your workaround (using a MediaStreamAudioDestinationNode and an Audio object), and the issue seems to be related to nothing being connected. I modified my example to toggle a single buffer (similar to your example, but with an AudioBufferSourceNode) and observed a similar frequency drop. However, when using a GainNode in between and setting its gain.value to either 0 or 1, the frequency drops disappeared (this isn't going to be the solution if you want to create and connect new AudioBuffers dynamically).
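A sketch of that gating approach, toggling the gain instead of stopping and restarting sources (someAudioBuffer is a placeholder for one of the generated buffers):

const ctx = new AudioContext();
const streamDestination = ctx.createMediaStreamDestination();
const gate = ctx.createGain();
gate.gain.value = 0;                      // start silent

const source = ctx.createBufferSource();
source.buffer = someAudioBuffer;
source.loop = true;
source.connect(gate);
gate.connect(streamDestination);
source.start();

// Toggle audibility without disconnecting anything
function setAudible(on) {
  gate.gain.value = on ? 1 : 0;
}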
I'm looking at options to switch from Flash (Strobe) to an HTML5 solution (using Media Source Extensions with DASH or HLS).
According to the HTML5 spec for video, we can't get the duration of a live stream:
The duration attribute must return the time of the end of the media resource,
in seconds, on the media timeline. If no media data is available, then the
attributes must return the Not-a-Number (NaN) value. If the media resource is
known to be unbounded (e.g. a streaming radio), then the attribute must return
the positive Infinity value.
My live stream is not a "sliding window", meaning that we have a fixed start date. I am currently using the Strobe player, and it actually increases the duration as it plays, whereas HTML5 always returns Infinity.
I wanted to know if some options are available to maintain a duration myself (by parsing fragments, for example; this library does that in a way).
I don't have enough reputation to comment, so I'll type it here.
I think it's best to look into the .seekable and .buffered properties of the HTMLMediaElement. You can use .buffered, which returns a TimeRanges object, to track the duration of the stream in your media element, but the media element itself has no means of knowing how long the stream might be.
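For example, a rough way to read the furthest buffered point as a stand-in for the duration (with the caveat described below):

const video = document.querySelector('video');

function knownDuration() {
  const ranges = video.buffered;
  // End of the last buffered range is the furthest point the element knows about
  return ranges.length ? ranges.end(ranges.length - 1) : 0;
}

video.addEventListener('progress', () => {
  console.log('buffered up to', knownDuration(), 'seconds');
});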
The problem is that .buffered might not always tell you "how much stream" there is, if you have paused for a long time, for example.
When I tested their behaviour on an Android virtual device with an HLS stream in Chrome, after several seconds of playback, .buffered returned a TimeRanges object with length 1 and video.buffered.end(0) was 0, and .seekable returned the same thing but with video.seekable.end(0) == Infinity.
If you want precise data, I would agree that a parsing library, one that parses the duration of an HLS playlist for example, is the best option, although it's not elegant at all.
I've been working a lot with AGAL vertex and fragment shaders. I've got individual objects lit correctly (including specular shading) but I'd like to have objects cast shadows on OTHER objects. I have looked online, but I think most people working directly with AGAL have built custom Stage3D libraries and the shadow-casting solution doesn't seem to be in the public domain. Anyone willing to change that?
I'd like to know how to get an object to cast a shadow on another. I can't post what I've tried, because I can't get my head around where to begin on this problem. How would you pass the information (whether other objects are blocking the light) into another object's shader?
Thanks.
It's called deferred shading; you have to do two passes of the vertex and fragment shaders.
In the first pass you accumulate information about distances, normals, occlusion...
In the second pass you render and apply the information from the first pass to produce the shadows.
Another option is shadow mapping:
Basic shadowmap
The basic shadowmap algorithm consists of two passes. First, the scene is rendered from the point of view of the light. Only the depth of each fragment is computed. Next, the scene is rendered as usual, but with an extra test to see if the current fragment is in the shadow.
The “being in the shadow” test is actually quite simple. If the current sample is further from the light than the shadowmap at the same point, this means that the scene contains an object that is closer to the light. In other words, the current fragment is in the shadow.
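To make the comparison concrete, here is the per-fragment test written as plain JavaScript pseudocode; in a real renderer this comparison happens in the fragment shader (AGAL in your case), and the two depth values would come from the shadow-map texture and from the fragment's position projected into light space:

// depthFromShadowMap: depth recorded when rendering from the light's point of view
// fragmentDepthInLightSpace: this fragment's depth, projected into light space
// bias: small offset to avoid self-shadowing ("shadow acne")
function isInShadow(depthFromShadowMap, fragmentDepthInLightSpace, bias) {
  return fragmentDepthInLightSpace - (bias || 0.005) > depthFromShadowMap;
}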
I have been playing around with a very simple HTML5/Canvas based drawing app, just for the browser. I am setting it up as a proof of concept to play around with different programmatic drawing effects and such. However, I'm running into an issue, namely a SecurityError, when I try to grab the ImageData from the canvas 2D context.
Now, I know all about the security issues with cross-domain content and with running local content instead of serving it from a web server and all that. This is not my issue. I am not using any images. And the weirder thing is that I can grab the ImageData in some places in my code, but not others. Specifically, not within a certain function, which is where I want to.
Here is a fork of my code:
http://jsfiddle.net/grimertop90/77Z42/
Feel free to play around, and try drawing on the canvas.
(Note: I have been developing in Chrome and it works fine, but I just tried running it in Firefox and it seems to have issues. For my purposes, please just check it out in Chrome.)
If you look through the JavaScript, you should see three points marked "Case 1", "Case 2", and "Case 3". These are the three points in my code where I have attempted to grab the canvas's ImageData. Cases 1 and 3 work; Case 2, the important one, does not. Case 1 is during initialization. Case 3 is upon a user's button click. Case 2, the broken one, is fired when a certain number of points have been drawn on the canvas.
I have no clue what the issue might be, especially since it works in 2 out of 3 places.
Your code works fine; your only problem is the missing arguments in the arcSketch call. Currently you have:
if (points.length >= 100) { arcSketch(); }
Just change it to:
if (points.length >= 100) { arcSketch(x,y); }
You were trying to call ctx.getImageData(NaN, NaN, 200, 200), which was throwing the error.
You can see it working here
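If you want to guard against this class of bug in the future, a simple check before the getImageData call would look something like this (the body of arcSketch lives in your fiddle, so this is only a sketch):

function arcSketch(x, y) {
  // Bail out early instead of letting getImageData choke on non-finite coordinates
  if (!Number.isFinite(x) || !Number.isFinite(y)) {
    console.warn('arcSketch called without valid coordinates:', x, y);
    return;
  }
  const imageData = ctx.getImageData(x, y, 200, 200);
  // ...rest of the drawing logic from the fiddle...
}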