Can I make a spectral noise gate with the web audio API? - html5-audio

Is there a way, using the HTML5 Web Audio API, to specify a node that significantly reduces (or even silences) noise that falls within a certain frequency range and under a certain amplitude?
I've noticed BiquadFilterNodes and other filters, but they don't seem to take the amplitude of the noise into account.
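There is no single built-in node that does this, as far as I know; a true per-bin spectral gate would need an FFT in an AudioWorklet (or the deprecated ScriptProcessorNode). Purely as an illustrative sketch (not from the thread), one crude approximation is to measure the level inside the target band with an AnalyserNode and duck a band-limited path when that level drops below a threshold. All frequencies and thresholds below are placeholders.

```javascript
// Crude band-limited noise gate sketch with standard Web Audio nodes.
const ctx = new AudioContext();

function setupGate(mediaElement) {
  const source = ctx.createMediaElementSource(mediaElement);

  // Measure the energy of the band of interest (centre frequency is an assumption).
  const bandpass = ctx.createBiquadFilter();
  bandpass.type = 'bandpass';
  bandpass.frequency.value = 1000;
  bandpass.Q.value = 1.0;
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(bandpass).connect(analyser);

  // Audible path 1: the band itself, behind a gain node so it can be gated.
  const bandOnly = ctx.createBiquadFilter();
  bandOnly.type = 'bandpass';
  bandOnly.frequency.value = 1000;
  bandOnly.Q.value = 1.0;
  const bandGain = ctx.createGain();
  source.connect(bandOnly).connect(bandGain).connect(ctx.destination);

  // Audible path 2: everything outside the band passes through untouched.
  const notch = ctx.createBiquadFilter();
  notch.type = 'notch';
  notch.frequency.value = 1000;
  notch.Q.value = 1.0;
  source.connect(notch).connect(ctx.destination);

  const data = new Uint8Array(analyser.frequencyBinCount);
  const THRESHOLD = 30; // 0-255 byte scale; tune empirically (assumption)

  function tick() {
    analyser.getByteFrequencyData(data);
    const avg = data.reduce((a, b) => a + b, 0) / data.length;
    // Close the gate for the band only when its energy is below the threshold.
    bandGain.gain.setTargetAtTime(avg < THRESHOLD ? 0 : 1, ctx.currentTime, 0.05);
    requestAnimationFrame(tick);
  }
  tick();
}
```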

Related

Spritestage and multiple spritesheets

My team is currently working on a rather large web application. We have switched from the Flash platform to HTML5 in the hope of a one-size-fits-all platform.
The UI is mainly based on createjs, which, by the way, I really enjoy working with.
However, we have now reached the maturity phase and started optimizing some of the animations that don't run smoothly, especially in IE.
The thing is that we have around 1500 sprites (PNGs & JPGs) which are drawn onto a stage. We only draw around 60 of them per frame.
They are rather large (up to 800x800 pixels), and the application engine can choose which 60 to show more or less randomly.
The images are packed in a zip file and unpacked in the browser; HTML images are constructed by converting the binary data to a base64-encoded string, which is assigned to the src property of an image.
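For reference, a minimal sketch of that unpack-and-assign step (the byte array and MIME type are placeholders; an object URL is an alternative to base64):

```javascript
// Construct an <img> from binary data unpacked in the browser.
// "bytes" is assumed to be a Uint8Array extracted from a zip entry.
function imageFromBytes(bytes, mimeType = 'image/png') {
  // base64 route, chunked to avoid call-stack limits on large images.
  let binary = '';
  const CHUNK = 0x8000;
  for (let i = 0; i < bytes.length; i += CHUNK) {
    binary += String.fromCharCode.apply(null, bytes.subarray(i, i + CHUNK));
  }
  const img = new Image();
  img.src = 'data:' + mimeType + ';base64,' + btoa(binary);
  return img;

  // Alternative that skips the base64 step entirely:
  // img.src = URL.createObjectURL(new Blob([bytes], { type: mimeType }));
}
```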
So in each frame render, a set of around 60 images is drawn to the stage, and this is for some reason slow.
I have therefore spent some time experimenting with the SpriteStage of createjs to take advantage of WebGL, but with only small improvements.
So now I'm considering packing our sprites into a spritesheet, which results in many sheets because of the large amount of data.
My question is therefore:
Would SpriteStage gain any improvement if my sprites are spread across multiple sheets? According to the documentation, only spritesheets with a single image are supported.
Best regards
/Mikkel Rasmussen
In general, spritesheets speed up rendering by decreasing the number of draw calls required per frame. Instead of, say, using a different texture and render call for every sprite, spritesheet implementations can draw multiple sprites with one render call, provided that the sprite sheet contains all the different individual sprite images. So to answer your question, you are unlikely to see performance gains if the sprites you want to draw are scattered across different sprite sheets.
Draw calls have significant overhead and it is generally a good idea to minimize them. 1500 individual draw calls would be pretty slow.
I don't know if this is applicable to your situation, but it is possible your bottleneck is not the number of draw calls you dispatch to the GPU, but that you are doing too much overdraw, since you mention your sprites are 800x800 each. If that is the case, try to render front to back with the depth test or stencil test turned on.
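For illustration, a rough raw-WebGL sketch of that front-to-back, depth-tested idea (visibleSprites and drawSprite are placeholders, and it assumes the sprites are effectively opaque):

```javascript
// Reduce overdraw by drawing opaque sprites front to back with the depth
// test enabled, so fragments hidden behind already-drawn sprites are rejected.
// "gl" is a WebGLRenderingContext; drawSprite() stands in for whatever issues
// the actual textured-quad draw call.
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

// Sort the ~60 visible sprites nearest first (smallest depth value first).
visibleSprites.sort((a, b) => a.depth - b.depth);

for (const sprite of visibleSprites) {
  // Each sprite is drawn at its own depth; fragments behind an
  // already-drawn sprite fail the depth test and are skipped.
  drawSprite(gl, sprite);
}
```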

Cesium Resampling

I know that Cesium offers several different interpolation methods, including linear (or bilinear in 2D), Hermite, and Lagrange. One can use these methods to resample sets of points and/or create curves that approximate sampled points, etc.
However, the question I have is what method does Cesium use internally when it is rendering a 3D scene and the user is zooming/panning all over the place? This is not a case where the programmer has access to the raster, etc, so one can't just get in the middle of it all and call the interpolation functions directly. Cesium is doing its own thing as quickly as it can in response to user control.
My hunch is that the default is bilinear, but I don't know that nor can I find any documentation that explicitly says what is used. Further, is there a way I can force Cesium to use a specific resampling method during these activities, such as Lagrange resampling? That, in fact, is what I need to do: force Cesium to employ Lagrange resampling during scene rendering. Any suggestions would be appreciated.
EDIT: Here's a more detailed description of the problem…
Suppose I use Cesium to set up a 3-D model of the Earth including a greyscale image chip at its proper location on the model Earth's surface, and then I display the results in a Cesium window. If the view point is far enough from the Earth's surface, then the number of pixels displayed in the image chip part of the window will be fewer than the actual number of pixels that are available in the image chip source. Some downsampling will occur. Likewise, if the user zooms in repeatedly, there will come a point at which there are more pixels displayed across the image chip than the actual number of pixels in the image chip source. Some upsampling will occur. In general, every time Cesium draws a frame that includes a pixel data source there is resampling happening. It could be nearest neighbor (doubt it), linear (probably), cubic, Lagrange, Hermite, or any one of a number of different resampling techniques. At my company, we are using Cesium as part of a large government program which requires the use of Lagrange resampling to ensure image quality. (The NGA has deemed that best for its programs and analyst tools, and they have made it a compliance requirement. So we have no choice.)
So here's the problem: while the user is interacting with the model, for instance zooming in, the drawing process is not in the programmer's control. The resampling is either happening in the Cesium layer itself (hopefully) or in even still lower layers (for instance, the WebGL functions that Cesium may be relying on). So I have no clue which technique is used for this resampling. Worse, if that technique is not Lagrange, then I don't have any clue how to change it.
So the question(s) would be this: is Cesium doing the resampling explicitly? If so, then what technique is it using? If not, then what drawing packages and functions are Cesium relying on to render an image file onto the map? (I can try to dig down and determine what techniques those layers may be using, and/or have available.)
UPDATE: Wow, my original answer was a total misunderstanding of your question, so I've rewritten it from scratch.
With the new edits, it's clear your question is about how images are resampled for the screen while rendering. These images are texture maps in WebGL, and the process of getting them to the screen quickly is implemented in hardware, on the graphics card itself. Software on the CPU is not performant enough to map individual pixels to the screen one at a time, which is why we have hardware-accelerated 3D cards.
Now for the bad news: this hardware supports nearest neighbor, linear, and mipmapping. That's it. 3D graphics cards do not use any fancier interpolation, as it needs to be done in a fraction of a second to keep the frame rate as high as possible.
Mipmapping is described well by @gman in his article WebGL 3D Textures. It's a long article, but search for the word "mipmap" and skip ahead to his description of it. Basically, a single image is reduced into smaller images prior to rendering, so an appropriately sized starting point can be chosen at render time. But there will always be a final mapping to the screen, and as you can see, the choices are NEAREST or LINEAR.
Quoting @gman's article here:
You can choose what WebGL does by setting the texture filtering for each texture. There are 6 modes:
NEAREST = choose 1 pixel from the biggest mip
LINEAR = choose 4 pixels from the biggest mip and blend them
NEAREST_MIPMAP_NEAREST = choose the best mip, then pick one pixel from that mip
LINEAR_MIPMAP_NEAREST = choose the best mip, then blend 4 pixels from that mip
NEAREST_MIPMAP_LINEAR = choose the best 2 mips, choose 1 pixel from each, blend them
LINEAR_MIPMAP_LINEAR = choose the best 2 mips, choose 4 pixels from each, blend them
I guess the best news I can give you is that Cesium uses the best of those, LINEAR_MIPMAP_LINEAR, to do its own rendering. If you have a strict requirement for more time-consuming imagery interpolation, that means you have a requirement not to use a realtime 3D hardware-accelerated graphics card, as there is no way to do Lagrange image interpolation during a realtime render.
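For reference, those six modes are the standard WebGL texture filtering parameters, chosen per texture with texParameteri rather than at draw time. A minimal sketch (the image variable is a placeholder, and WebGL1 requires power-of-two dimensions for mipmapping):

```javascript
// Select the filtering WebGL uses when a texture is minified/magnified
// to the screen. LINEAR_MIPMAP_LINEAR is the "best" (trilinear) option
// from the list quoted above.
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
gl.generateMipmap(gl.TEXTURE_2D);

// Minification (zoomed out): pick the two best mips and blend 4 texels from each.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
// Magnification (zoomed in): only NEAREST or LINEAR are available.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
```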

How to measure complete all-in performance of DOM changes?

I've found lots of information about measuring load time for pages and quite a bit about profiling FPS performance of interactive applications, but this is something slightly different than that.
Say I have a chart rendered in SVG and every click I make causes the chart to render slightly differently. I want to get a sense of the complete time elapsed between the click and the point in time that the pixels on the screen actually change. Is there a way to do this?
Measuring the JavaScript time is straightforward, but that doesn't take into consideration any of the time the browser spends doing layout, flow, paint, etc.
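For context, a minimal sketch of that straightforward measurement plus a double requestAnimationFrame approximation (chartElement and updateChart are placeholders); even this only gets you roughly to the frame after the change, not to the compositor or the actual screen update:

```javascript
// Approximate click-to-render time from inside the page.
// performance.now() after two requestAnimationFrame callbacks lands roughly
// after the frame following the DOM/SVG change has been produced, but it
// cannot observe compositing or the display itself.
chartElement.addEventListener('click', (event) => {
  const start = performance.now();

  updateChart(event);            // placeholder for the SVG re-render

  const scriptTime = performance.now() - start;

  requestAnimationFrame(() => {
    requestAnimationFrame(() => {
      const approxRenderTime = performance.now() - start;
      console.log('script:', scriptTime.toFixed(1), 'ms,',
                  'approx. to next frame:', approxRenderTime.toFixed(1), 'ms');
    });
  });
});
```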
I know that Chrome timeline view shows a ton of good information about this, which is great for digging into issues but not so great for taking measurements because the tool itself affects performance and, more importantly, it's Chrome only. I was hoping there was a browser independent technique that might work. Something akin to how the Navigation Performance API works for page load times.
You may consider using HDMI capture hardware (just google for it) or a high-speed camera to create a video, which could be analyzed offline.
http://www.webpagetest.org/ supports capturing using software only, but I guess it would be too slow for what you want to measure.

Maximum number of canvases (used as layers)?

I am writing an HTML5 canvas app in javascript. I am using multiple canvas elements as layers to support animation without having to re-draw the whole image every frame.
Is there a maximum number of canvas elements that I can layer on top of each other in this way (and see an appropriate result on all of the HTML5 platforms, of course)?
Thank you.
I imagine you will probably hit a practical performance ceiling long before you hit the hard specified limit of somewhere between several thousand and 2,147,483,647 ... depending on the browser and what you're measuring (number of physical elements allowed on the DOM or the maximum allowable z-index).
This is correlated to another of my favorite answers to pretty much any question that involves the phrase "maximum number" - if you have to ask, you're probably Doing It Wrong™. Taking an approach that is aligned with the intended design is almost always just as possible, and avoids these unpleasant murky questions like "will my user's iPhone melt if I try to render 32,768 canvas elements stacked on top of each other?"
This is a question of the limits of the DOM, which are large. I expect you will hit a performance bottleneck before you hit a hard limit.
The key in your situation, I would say, is to prepare some simple benchmarks/tests that dynamically generate Canvases (of arbitrary number), fill them with content, and add them to the DOM. You should be able to construct your tests in such a way where A) if there is a hard limit you will spot it (using identifiable canvas content or exception handling), or B) if there is a performance limit you will spot it (using profiling or timers). Then perform these tests on a variety of browsers to establish your practical "limit".
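As a rough illustration of such a test (sizes, counts, and the drawn content are arbitrary placeholders):

```javascript
// Create N stacked canvases, draw identifiable content on each, and time the
// whole operation. Run with increasing N across browsers to find the practical
// limit, or spot a hard one.
function benchmarkLayers(n) {
  const t0 = performance.now();
  const container = document.createElement('div');
  container.style.position = 'relative';

  for (let i = 0; i < n; i++) {
    const canvas = document.createElement('canvas');
    canvas.width = 800;
    canvas.height = 800;
    canvas.style.position = 'absolute';
    canvas.style.zIndex = i;

    const ctx = canvas.getContext('2d');
    if (!ctx) {
      // Some browsers start returning null contexts once resources run out.
      console.warn('context creation failed at layer', i);
      break;
    }
    ctx.fillText('layer ' + i, 10, 20);   // identifiable content per layer
    container.appendChild(canvas);
  }

  document.body.appendChild(container);
  console.log(n, 'layers in', (performance.now() - t0).toFixed(1), 'ms');
}

benchmarkLayers(100);   // try 100, 1000, 5000, ... per browser
```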
There are also great resources available here https://developers.facebook.com/html5/build/games/ from the Facebook HTML5 games initiative. Therein are links to articles and open source benchmarking tools that address and test different strategies similar to yours.

Non-linear volume for HTMLMediaElement

I've written my own media player interface using javascript and html5. Currently my volume slider maps to the browser's volume attribute 1:1. I'd quite like to adjust this to account for perceived loudness.
The volume attribute section of the HTML5 spec says:
... 0.0 being silent, and 1.0 being the loudest setting, values in between increasing in loudness. The range need not be linear.
This seems to imply that there's no standard for what scale browsers should use. I'm worried that if I adjust for perceived loudness in one browser, another browser might already be doing this resulting in an overcorrection.
Does anyone know what volume scales browsers currently use and whether these are likely to change in future?
Assuming you cannot directly acquire this information for each browser, I would suggest developing a set of empirical tests. Off-hand, I can't imagine the browser vendors using anything other than logarithmic or linear volume control, so that leaves only two outcomes to consider in your tests. Once your test flow is created, you can reuse it whenever a new browser version is released.
As for the tests themselves, they could be by your own perception (test the loudness at 100% vs. 50%, and gauge whether 50% actually sounds half as loud, or only 75%-ish as loud); or they could be through recording the "what you hear" channel on your soundcard and analyzing the waveform in a custom app or tool, this time looking for a 0.5 drop in (peak) amplitude if simple linear, or a greater-than-0.5 drop if logarithmic. If building your own analysis tool, PCM waveform data is not too difficult to work with, assuming you are comfortable with C/C++/C#/etc.
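A possible in-browser variant of that measurement (a sketch only, not what the answer proposes): route the test element through the Web Audio API and compare RMS levels at two volume settings. It assumes a steady test tone and that the element's volume attribute is applied before the signal enters the graph, which appears to be the behaviour in current browsers.

```javascript
// Estimate how a browser maps the volume attribute by measuring RMS amplitude
// of an <audio> element at two volume settings via an AnalyserNode.
// A linear scale should show roughly half the RMS at volume 0.5.
const ctx = new AudioContext();
const audio = document.querySelector('audio');   // assumed test element playing a steady tone
const source = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser).connect(ctx.destination);

function rms() {
  const buf = new Float32Array(analyser.fftSize);
  analyser.getFloatTimeDomainData(buf);
  return Math.sqrt(buf.reduce((sum, x) => sum + x * x, 0) / buf.length);
}

async function compare() {
  audio.volume = 1.0;
  await new Promise(r => setTimeout(r, 500));    // let the level settle
  const full = rms();

  audio.volume = 0.5;
  await new Promise(r => setTimeout(r, 500));
  const half = rms();

  console.log('ratio at volume 0.5:', (half / full).toFixed(2));
  // ~0.5 suggests a linear scale; noticeably less suggests a log-like curve.
}
```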