How to play music a little delayed with a real-time audio analyser - HTML

Audio.AnalyserFreqBinCount("audio", 0)
My game analyzes audio levels so that the music can affect the game's behaviour.
How can I play the audio slightly delayed, after it has been analyzed? At the moment the analysis happens at exactly the same time the music is played.

Assuming you can access the Web Audio API via the framework, you could use createDelay() to create a DelayNode. The delay is given in seconds.
Then simply:
Plug the source into the analyser node as well as the delay node.
Connect the delay node to the destination.
Process the data from the analyser node as usual, as sketched below.
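A minimal sketch of that graph, assuming the sound comes from an <audio> element and a 200 ms delay (both are illustrative choices):

const ctx = new AudioContext(); // note: browsers may require a user gesture before audio starts
const audioEl = document.querySelector('audio');
const source = ctx.createMediaElementSource(audioEl);
const analyser = ctx.createAnalyser();
const delay = ctx.createDelay(1.0); // argument is the maximum delay, in seconds
delay.delayTime.value = 0.2;        // playback runs 200 ms behind the analysis

source.connect(analyser);           // analysis path (not audible)
source.connect(delay);              // playback path
delay.connect(ctx.destination);

const data = new Uint8Array(analyser.frequencyBinCount);
function tick() {
  analyser.getByteFrequencyData(data);
  // drive the game from `data` here; the corresponding sound is heard ~200 ms later
  requestAnimationFrame(tick);
}
tick();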

Related

How to measure decoding performance when using MSE or WebRTC?

For the test I'm thinking of using WebSocket to push the stream to the client, with the video encoded as fragmented MP4. The client then decodes the stream ASAP using MSE (MediaSource) and WebRTC (MediaStream) along with the HTML5 <video> tag. This is just a test; the real use case I'm targeting is real-time live streaming.
Is there a way to measure the frame-by-frame decoding time, i.e. how long the decoder takes to decode a frame and the renderer takes to render it? Alternatively, how can I get the real-time FPS?
Probably the closest you can get is by watching the webkitDroppedFrameCount and webkitDecodedFrameCount properties of the HTMLVideoElement over time. (Note, this only works in Chrome.) This isn't really going to give you the time for decoded frames, but will help you measure related performance.
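For example, sampling those counters once per second gives a rough decoded-FPS figure (a sketch; the properties are non-standard and Chrome-only):

const video = document.querySelector('video');
let lastCount = video.webkitDecodedFrameCount || 0;
setInterval(() => {
  const count = video.webkitDecodedFrameCount || 0;
  console.log('decoded fps ~', count - lastCount,
              '| dropped total:', video.webkitDroppedFrameCount);
  lastCount = count;
}, 1000);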
The time to decode one frame isn't really all that useful to you. It's going to be the same, regardless of where the data came from. It's also going to be different from frame to frame. What matters is that the decoder can keep up with the playback rate.
I should also point out that there's no reason to use web sockets if you're only sending data one direction. If you're just streaming video data to a client, use regular HTTP! You can stream the response with the Fetch API and skip the overhead of web sockets entirely.
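A sketch of that approach, streaming the fetch response body straight into a SourceBuffer (the URL and codec string are placeholders for your own stream; buffer eviction is omitted):

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // the codec string must match your fragmented MP4 encoder settings
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  const response = await fetch('/live.mp4'); // hypothetical endpoint
  const reader = response.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // only one append may be in flight at a time
    if (sb.updating) {
      await new Promise(r => sb.addEventListener('updateend', r, { once: true }));
    }
    sb.appendBuffer(value);
  }
});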
You can check some useful metrics in several ways when using WebRTC.
webrtc-internals (Chrome only)
If you are using WebRTC, you can check WebRTC internals.
After creating the peerConnection object, type the following into Chrome's address bar:
chrome://webrtc-internals
WebRTC Internals Document
The WebRTC Externals browser extension provides the same for other browsers.
There you can check some useful metrics.
FPS
Under the stats graphs for ssrc_****_recv (ssrc) (video), you can check the frame rate with values like googFrameRateDecoded, googFrameRateOutput, and googFrameRateReceived.
Delay
Under the same graphs, you can check delay with values like googTargetDelayMs, googRenderDelayMs, and googJitterBufferMs.
For more about putting these metrics into practice, check this out:
https://flashphoner.com/oh-webcam-online-broadcasting-latency-thou-art-a-heartless-bitch/
WebRTC Standard Stats
You can also access the stats in a standard way from the peerConnection object, via its getStats() method.
WebRTC Standard Stats
WebRTC Stats API
https://www.w3.org/TR/webrtc-stats/#dom-rtcreceivedrtpstreamstats
RTCReceivedRtpStreamStats - jitter
https://www.w3.org/TR/webrtc-stats/#dom-rtcvideoreceiverstats
RTCVideoReceiverStats - jitterBufferDelayed
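For illustration, a sketch that polls the standard stats from an existing RTCPeerConnection (pc is assumed to be connected already; exact field availability varies by browser version):

setInterval(async () => {
  const stats = await pc.getStats();
  stats.forEach(report => {
    if (report.type === 'inbound-rtp' && report.kind === 'video') {
      // treat each field as optional
      console.log('framesDecoded:', report.framesDecoded,
                  '| jitter:', report.jitter,
                  '| jitterBufferDelay:', report.jitterBufferDelay);
    }
  });
}, 1000);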

Low latency (< 2s) live video streaming HTML5 solutions?

With Chrome disabling Flash by default very soon, I need to start looking into HTML5 replacements for Flash/RTMP.
Currently with Flash + RTMP I have a live video stream with < 1-2 second delay.
I've experimented with MPEG-DASH, which seems to be the new industry standard for streaming, but it came up short: a 5 second delay was the best I could squeeze out of it.
For context, I am trying to allow users to control physical objects they can see on the stream, so anything above a couple of seconds of delay leads to a frustrating experience.
Are there any other techniques, or are there really no low-latency HTML5 solutions for live streaming yet?
Technologies and Requirements
The only web-based technology set really geared toward low latency is WebRTC. It's built for video conferencing. Codecs are tuned for low latency over quality. Bitrates are usually variable, opting for a stable connection over quality.
However, you don't necessarily need this low latency optimization for all of your users. In fact, from what I can gather on your requirements, low latency for everyone will hurt the user experience. While your users in control of the robot definitely need low latency video so they can reasonably control it, the users not in control don't have this requirement and can instead opt for reliable higher quality video.
How to Set it Up
In-Control Users to Robot Connection
Users controlling the robot will load a page that utilizes some WebRTC components for connecting to the camera and control server. To facilitate WebRTC connections, you need some sort of STUN server. To get around NAT and other firewall restrictions, you may need a TURN server. Both of these are usually built into Node.js-based WebRTC frameworks.
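The configuration side of this is small; a minimal sketch, with placeholder server URLs and credentials:

const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },  // STUN for discovering the public address
    { urls: 'turn:turn.example.com:3478',    // TURN relay, if a direct connection fails
      username: 'user', credential: 'secret' }
  ]
});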
The cam/control server will also need to connect via WebRTC. Honestly, the easiest way to do this is to make your controlling application somewhat web based. Since you're using Node.js already, check out NW.js or Electron. Both can take advantage of the WebRTC capabilities already built into WebKit, while still giving you the flexibility to do whatever you'd like with Node.js.
The in-control users and the cam/control server will make a peer-to-peer connection via WebRTC (or TURN server if required). From there, you'll want to open up a media channel as well as a data channel. The data side can be used to send your robot commands. The media channel will of course be used for the low latency video stream being sent back to the in-control users.
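Continuing the sketch above, opening the command channel and attaching the camera's track might look like this (signaling is assumed to happen elsewhere; the channel name and command format are made up):

// in-control user's side: a data channel for robot commands
const control = pc.createDataChannel('robot-control');
control.onopen = () => control.send(JSON.stringify({ cmd: 'forward' }));

// cam/control server side: attach the camera stream for the low latency video
async function attachCamera(pc) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
}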
Again, it's important to note that the video that will be sent back will be optimized for latency, not quality. This sort of connection also ensures a fast response to your commands.
Video for Viewing Users
Users that are simply viewing the stream and not controlling the robot can use normal video distribution methods. It is actually very important for you to use an existing CDN and transcoding services, since you will have 10k-15k people watching the stream. With that many users, you're probably going to want your video in a couple different codecs, and certainly a whole array of bitrates. Distribution with DASH or HLS is easiest to work with at the moment, and frees you of Flash requirements.
You will probably also want to send your stream to social media services. This is another reason why it's important to start with a high quality HD stream. Those services will transcode your video again, reducing quality. If you start with good quality first, you'll end up with better quality in the end.
Metadata (chat, control signals, etc.)
It isn't clear from your requirements what sort of metadata you need, but for small message-based data you can use a web socket library such as Socket.IO. As you scale this up to a few instances, you can use pub/sub, such as Redis, to distribute messaging across the servers.
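A sketch of that setup, assuming Socket.IO with its Redis adapter (the @socket.io/redis-adapter and redis packages; the port, URL, and event names are placeholders):

const { Server } = require('socket.io');
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');

const io = new Server(3000);
const pubClient = createClient({ url: 'redis://localhost:6379' });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  io.adapter(createAdapter(pubClient, subClient));
  io.on('connection', socket => {
    // any instance can emit; Redis relays the message to sockets on the others
    socket.on('chat', msg => io.emit('chat', msg));
  });
});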
To synchronize the metadata to the video depends a bit on what's in that metadata and what the synchronization requirement is, specifically. Generally speaking, you can assume that there will be a reasonable but unpredictable delay between the source video and the clients. After all, you cannot control how long they will buffer. Each device is different, every connection is variable. What you can assume is that playback will begin with the first segment the client downloads. In other words, if a client starts buffering a video and begins playing it 2 seconds later, the video is 2 seconds behind the time at which the first request was made.
Detecting when playback actually begins is possible client-side. Since the server knows the timestamp at which video was sent to the client, it can inform the client of its offset relative to the beginning of video playback. Since you'll probably be using DASH or HLS, and you need to use MSE with AJAX to get the data anyway, you can use the response headers in the segment response to indicate the timestamp for the beginning of the segment. The client can then synchronize itself. Let me break this down step-by-step (a code sketch follows the list):
Client starts receiving metadata messages from application server.
Client requests the first video segment from the CDN.
CDN server replies with video segment. In the response headers, the Date: header can indicate the exact date/time for the start of the segment.
Client reads the response Date: header (let's say 2016-06-01 20:31:00). Client continues buffering the segments.
Client starts buffering/playback as normal.
Playback starts. Client can detect this state change on the player and knows that 00:00:00 on the video player is actually 2016-06-01 20:31:00.
Client displays metadata synchronized with the video, dropping any messages from previous times and buffering any for future times.
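A sketch of the client side of that synchronization (the Date header is assumed to mark the segment start, as described above; display() stands in for your hypothetical metadata renderer):

let playbackEpoch = null; // wall-clock ms corresponding to player time 00:00:00

async function fetchSegment(url) {
  const res = await fetch(url);
  if (playbackEpoch === null) {
    // the Date header of the first segment marks the start of video playback
    playbackEpoch = new Date(res.headers.get('Date')).getTime();
  }
  return new Uint8Array(await res.arrayBuffer());
}

// show a metadata message when the video reaches its timestamp;
// drop messages from the past, buffer the rest
function scheduleMessage(msg, video) {
  const showAt = (msg.timestampMs - playbackEpoch) / 1000; // seconds of video time
  const wait = showAt - video.currentTime;
  if (wait < -1) return; // more than a second in the past: drop it
  setTimeout(() => display(msg), Math.max(0, wait * 1000));
}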
This should meet your needs and give you the flexibility to do whatever you need to with your video going forward.
Why not [magic-technology-here]?
When you choose low latency, you lose quality. Quality comes from available bandwidth. Bandwidth efficiency comes from being able to buffer and optimize entire sequences of images when encoding. If you wanted perfect quality (lossless for each image), you would need a ton of bandwidth (gigabits per second, per viewer). That's why we have these lossy codecs to begin with.
Since you don't actually need low latency for most of your viewers, it's better to optimize for quality for them.
For the 2 users out of 15,000 who do need low latency, we can optimize for it. They will get substandard video quality, but will be able to actively control a robot, which is awesome!
Always remember that the internet is a hostile place where nothing works quite as well as it should. System resources and bandwidth are constantly variable. That's actually why WebRTC auto-adjusts (as best as reasonable) to changing conditions.
Not all connections can keep up with low latency requirements. That's why every single low latency connection will experience drop-outs. The internet is packet-switched, not circuit-switched. There is no real dedicated bandwidth available.
Having a large buffer (a couple seconds) allows clients to survive momentary losses of connections. It's why CD players with anti-skip buffers were created, and sold very well. It's a far better user experience for those 15,000 users if the video works correctly. They don't have to know that they are 5-10 seconds behind the main stream, but they will definitely know if the video drops out every other second.
There are tradeoffs in every approach. I think what I have outlined here separates the concerns and gives you the best tradeoffs in each area. Please feel free to ask for clarification or ask follow-up questions in the comments.

as3 - using resources - better to preload sounds or load them on the fly

I have a loop that will load a new sound every minute. My question is: am I better off loading all 50 at once and playing them when needed every minute, or loading each sound right before I play it?
What is the best way to save resources?
This is for a cell phone app, so I need to use all resources very carefully.
It's good practice to use resources on demand (load sounds when the user needs them), especially in resource-restricted environments like mobile devices. Every sound object consumes memory. So the answer is: load a sound on demand; after playback, unload it by setting the reference to the sound object to null; then load and play the next one. If you are loading from the network, you should load a few sounds ahead (for example, the current one and the next), because on mobile devices network activity turns on the radio, and switching the radio on and off is a battery-consuming process.
Network loading: load several on demand (for example, the current one and the next).
Filesystem loading: load only the current one on demand.

Avoid pausing a Flash game with Cheat Engine by pausing the browser process

For a Flash game I'm working on, I would like to know whether anything can be done to stop players who pause the game by pausing the browser process, as shown in this YouTube video.
I'm already looking into all the other great suggestions for avoiding, or at least slowing down, Cheat Engine hackers at: What is the best way to stop people hacking the PHP-based highscore table of a Flash game. But I couldn't find anything on pausing processes.
Pausing a process is detectable by calling (new Date()).getTime() repeatedly at short intervals. If, say, the difference between consecutive dates exceeds several seconds, and you are running this check once per 30 frames, then either your framerate has stalled or the process has been paused.
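A sketch of that check (shown in JavaScript; the AS3 version, driven by a Timer or ENTER_FRAME handler, is nearly identical, and the 5-second threshold is arbitrary):

let lastTick = new Date().getTime();
function checkForPause() {
  const now = new Date().getTime();
  if (now - lastTick > 5000) {
    // a multi-second gap between ticks: the process was likely suspended
    // (or the framerate collapsed); react here, e.g. invalidate the session
    onSuspectedPause(now - lastTick); // hypothetical handler
  }
  lastTick = now;
}
setInterval(checkForPause, 500);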

My Sound object is using too much memory and causing my application to crash. How can I empty the first half of the object's data?

I'm currently working on a dynamic MP3 player in AS3. The player will also support continuous (endless) radio streams.
Because my player will include a seek bar, I allow the user to seek through the Sound object's data. Now, with a continuous stream, data never stops accumulating in the user's RAM, because downloading never stops. This means that after a few hours of streaming, a lot of RAM is being used by my app. I've tested the app on my own machine, a very high-spec one, and the app crashes in my browser. When I say the app crashes, I mean the whole of Flash crashes, so I have to restart my browser in order to use Flash again. I know my app is the cause, as Flash has never crashed on me in the past, and it only happens after my app has been streaming for 2+ hours.
So what I want to do is only allow the user to cache up to an hour's worth of audio. After an hour, I want to clear the first half of the Sound object's data, so that only the most recent half hour of audio is stored and available for seeking.
So I have my stream:
var soundObj:Sound = new Sound();
soundObj.load(new URLRequest('stream.mp3')); // data keeps accumulating for as long as the stream runs
// etc.
and soundObj is where the data is stored. So my question: how would I clear the first 30 minutes of audio from that object?
Perhaps the Sound class is not meant to reliably play "unlimited" MP3 files, which seems to be your case; it is made to play normal MP3 songs. Two hours of MP3 sound can easily add up to more than 200 megabytes of data.
But there is a good solution: use the NetConnection and NetStream classes to stream the audio instead. There are many tutorials out there. You will still be able to stream your MP3s, just a bit differently: a central server is involved, which transcodes the MP3s on the fly and delivers them to you in a true "streaming" manner. One such server is Adobe Flash Media Server, an overpriced piece of work from Adobe. A lot of free and open-source alternatives exist that will work fine for your purposes; Red5 and nginx-rtmp, to name a couple that I have tested myself.