AMS doesn't receive unpublish command SOMETIMES over rtmpt - actionscript-3

This one has had me going for at least a week. I am trying to record a video file to AMS. It works great almost all of the time, except that about 1 in 10 or 15 recording sessions I never receive 'NetStream.Unpublish.Success' on my NetStream from AMS when I close the stream. I am connecting to AMS using RTMPT when this happens; it seems to work fine over RTMP. Also, it seems like this only happens in Safari on Mac, but since it's so intermittent I don't really trust that. Here is my basic flow:
// just a way to use promises with netStatusEvents
private function netListener(code:String, netObject:*):Promise {
    var deferred:Deferred = new Deferred();
    var netStatusHandler:Function = function (event:NetStatusEvent):void {
        if (event.info.level == 'error') {
            deferred.reject(event);
        } else if (event.info.code == code) {
            deferred.resolve(netObject);
            // we want this to be a one time listener since the connection can swap between record/playback
            netObject.removeEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
        }
    };
    netObject.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
    return deferred.promise;
}
// set up for recording
private function initRecord():void {
    Settings.recordFile = Settings.uniquePrefix + (new Date()).getTime();
    // detach any existing NetStream from the video
    _view.video.attachNetStream(null);
    // dispose of existing NetStream
    if (_videoStream) {
        _videoStream.dispose();
        _videoStream = null;
    }
    // disconnect before connecting anew
    (_nc.connected ? netListener('NetConnection.Connect.Closed', _nc) : Promise.when(_nc))
        .then(function (nc:NetConnection):void {
            netListener('NetConnection.Connect.Success', _nc)
                .then(function (nc:NetConnection):void {
                    _view.video.attachCamera(_webcam);
                    // get new NetStream
                    _videoStream = getNetStream(_nc);
                    ExternalInterface.call("CTplayer." + Settings.instanceName + ".onRecordReady", true);
                }, function (error:NetStatusEvent):void {
                    ExternalInterface.call("CTplayer." + Settings.instanceName + ".onError", error.info);
                });
            _nc.connect(Settings.recordServer);
        }); // end ncClose
    if (_nc.connected) _nc.close();
}
// stop recording
private function stop():void {
    netListener('NetStream.Unpublish.Success', _videoStream)
        .then(function (ns:NetStream):void {
            ExternalInterface.call("CTplayer." + Settings.instanceName + ".onRecordStop", Settings.recordFile);
        });
    _videoStream.attachCamera(null);
    _videoStream.attachAudio(null);
    _videoStream.close();
}
// start recording
private function record():void {
    netListener('NetStream.Publish.Start', _videoStream)
        .then(function (ns:NetStream):void {
            ExternalInterface.call("CTplayer." + Settings.instanceName + ".onRecording");
        });
    _videoStream.attachCamera(_webcam);
    _videoStream.attachAudio(_microphone);
    _videoStream.publish(Settings.recordFile, "record"); // fires NetStream.Publish.Start
}
Update
I am now using a new NetConnection per connection attempt and also not forcing port 80 (see my 'answer' below). This has not solved my connection woes, only made the failures less frequent. Now, roughly once a week, I still get some random failure of AMS or Flash. Most recently, someone made a recording and then Flash Player was unable to load the video for playback. The AMS logs show a connection attempt and then nothing; there should at least be a play event logged from when I load the metadata. This is quite frustrating and impossible to debug.

I would try two distinct NetConnection objects, one for record and one for replay. This removes the complexity around adding/removing listeners and the connect/reconnect/disconnect logic, and would IMO be cleaner.
NetConnections are cheap, and I've always used one per task at hand. The other advantage is that you can connect both at startup so the replay connection is ready instantly.
I've not seen a Promise used here before, but I'm not qualified to comment on whether that may cause a problem or not.
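For what it's worth, a rough sketch of that two-connection setup might look like the following. This is only illustrative: _recordNC, _playbackNC and the handler names are made up, while Settings.recordServer, getNetStream and _videoStream are borrowed from the question's code.
// Hypothetical sketch: one NetConnection per task, both connected at startup.
private var _recordNC:NetConnection = new NetConnection();
private var _playbackNC:NetConnection = new NetConnection();

private function connectBoth():void {
    _recordNC.addEventListener(NetStatusEvent.NET_STATUS, onRecordStatus);
    _playbackNC.addEventListener(NetStatusEvent.NET_STATUS, onPlaybackStatus);
    _recordNC.connect(Settings.recordServer);
    _playbackNC.connect(Settings.recordServer);
}

private function onRecordStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        // the record connection is ready; create the record NetStream on it
        _videoStream = getNetStream(_recordNC);
    }
}

private function onPlaybackStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        // the replay connection is connected up front, so playback can start instantly
    }
}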

I think my issue was connecting over port 80. I originally thought I had to use port 80 with RTMPT, so I set my Settings.recordServer variable to rtmpt://myamsserver.net:80/app. I'm now using a shotgun approach where I try a bunch of port/protocol combos at once and pick the first one to connect. It almost always picks port 443 over RTMPT, which seems much faster and more stable all around than 80, and I haven't had this issue since. It could also be due to not reusing the same NetConnection object, as Stefan suggested; it's hard to say.
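As a rough illustration of that shotgun approach (the URL list and names below are made up; only myamsserver.net comes from the question), one way to race several candidate connections and keep the first winner might be:
// Hypothetical sketch: try several protocol/port combos at once and keep
// whichever NetConnection reports Connect.Success first.
private var _candidates:Array = [
    "rtmp://myamsserver.net/app",
    "rtmpt://myamsserver.net:443/app",
    "rtmpt://myamsserver.net:80/app"
];
private var _winner:NetConnection;

private function shotgunConnect():void {
    for each (var url:String in _candidates) {
        var nc:NetConnection = new NetConnection();
        nc.addEventListener(NetStatusEvent.NET_STATUS, onCandidateStatus);
        nc.connect(url);
    }
}

private function onCandidateStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        if (_winner == null) {
            _winner = NetConnection(event.target); // first successful connection wins
        } else {
            NetConnection(event.target).close();   // close the slower candidates
        }
    }
}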

Related

Audio distortion occurs when using AudioWorkletProcessor with a MediaStream source and connecting a Bluetooth device while it is already running

In our project, we use AudioContext to wire up input from a microphone to an AudioWorkletProcessor and out to a MediaStream. Ultimately, this is sent to other peers in a WebRTC call.
If someone loads the page, the audio always sounds fine. But if they connect with a hard-wired microphone like a laptop mic or webcam, then connect a Bluetooth device (such as AirPods or headphones), the audio becomes distorted and robotic-sounding.
If we tear out all the other code and simplify it, we still have the issue.
bypassProcessor.js
// Basic processor that wires input to output without transforming the data
// https://github.com/GoogleChromeLabs/web-audio-samples/blob/main/audio-worklet/basic/hello-audio-worklet/bypass-processor.js
class BypassProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; ++channel) {
      output[channel].set(input[channel]);
    }
    return true;
  }
}

registerProcessor('bypass-processor', BypassProcessor);
main.js
const microphoneStream = await navigator.mediaDevices.getUserMedia({
  audio: true, // have also tried { channelCount: 1 } and { channelCount: { exact: 1 } }
  video: false
})
const audioCtx = new AudioContext()
const inputNode = audioCtx.createMediaStreamSource(microphoneStream)
await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
inputNode.connect(processorNode).connect(audioCtx.destination)
Interestingly, I have found that if you comment out the two audio worklet lines and instead create a simple gain node, then it works fine.
// await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
// const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
const gainNode = audioCtx.createGain()
Also, if you simply create the AudioWorkletNode but don't even connect it to the other nodes, this also reproduces the issue.
I've created a small React app here that reproduces the problem: https://github.com/JacobMuchow/audio_distortion_repro/tree/master
I've tried some options, such as detecting when this happens with the 'devicechange' event, then closing the old AudioContext and nodes and recreating everything, but this only works some of the time. If I wait a while and then recreate it again, it works, so I'm worried about some type of garbage-collection issue with the processor when attempting this, but that might be beside the point.
I suspect this has something to do with sample rates... when the AudioContext is correctly recreated it switches from 48 kHz to 16 kHz and then it sounds fine. But sometimes it is recreated still at 48 kHz and it continues to sound robotic.
Threads on the internet concerning this are incredibly sparse, and I'm hoping someone has specific experience with this issue or this API and can point out what I need to do differently.
For Chrome, the problem is very likely https://crbug.com/1090441, which was recently fixed. I think Firefox doesn't have this problem, but I didn't check.

Firebase functions not listening for specific path

I have a path in the database: /business/{businessId}/images/{imageId}/imageUrl
The business listener fires when a business is created, but later, when I push images into images, the businessImages listener doesn't fire. I used the listeners below.
exports.business = functions.database.ref('/business/{businessId}').onWrite(event => {
  console.log('lorem epsum****', event);
  return true;
});

exports.businessImages = functions.database.ref('/business/{businessId}/images/{imageId}/imageUrl').onWrite(event => {
  console.log('lorem epsum****', event);
  return true;
});
What could be the issue?
Thank you everyone for the answers, but the main issue was that I was inserting whole base64 images into imageUrl, resulting in very slow performance when listening for or fetching data at imageUrl. When I removed all the images, it worked great.
P.S. I know we should store URLs instead of base64 images.

Chrome doesn't use cache after power loss?

I am creating a digital signage player that uses Chrome as its display engine. We need to be able to muddle along without too much interruption if the network goes down.
Chrome works fine caching images, and I've set the "Expires" header to a month after access. I can set the player computer offline and have the app run for days with no problem. If I reboot the machine the right way (Start->Shut Down), caching still works as expected.
The issue is that when Chrome exits abnormally - either a crash or a power loss - Chrome ignores the cache on reboot and refuses to load images. This happens even if I cut power 5 minutes after it loads the page, so the content is not expiring.
My guess is that Chrome is set to ignore the cache after an abnormal exit to prevent a corrupted cache from continually crashing the browser. However, this behavior is not what I need.
Does anyone know of a command line arg or flag I can set to keep this from happening?
Thanks for your help.
I tried everything I could think of to make Chrome not invalidate the local cache on system failure, and came up empty. There are a few other people who have had the same question, and I didn't see an answer.
Here's what I did that made this work, and if someone else is having the same problem, it might be the workaround that you need.
I added a service worker that caches images. The code below isn't perfect yet, but it should be a starting place for someone... (FYI, I learned this 5 minutes ago, so if someone wants to give me a pointer or two on how to make this more elegant, I'm all ears.)
We cache anything that has a response type of "cors", so we only cache images coming from the remote server. Note that your images must be loaded via HTTPS for this to work.
Taken (mostly) from: https://developers.google.com/web/fundamentals/getting-started/primers/service-workers
var CACHE_NAME = 'shine_cache';
var urlsToCache = [
  '/'
];

self.addEventListener('install', function(event) {
  // Perform install steps
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(function(cache) {
        console.log('Opened cache');
        return cache.addAll(urlsToCache);
      })
  );
});

self.addEventListener('fetch', function(event) {
  //console.log('Handling fetch event for', event.request);
  if (event.request.method == 'POST') {
    //console.log("Skipping POST");
    event.respondWith(fetch(event.request));
    return;
  }

  if (event.request.headers.get('Accept').indexOf('image') !== -1) {
    event.respondWith(
      caches.match(event.request)
        .then(function(response) {
          // Cache hit - return response
          if (response) {
            console.log("Returning from cache.", event.request);
            return response;
          }

          // IMPORTANT: Clone the request. A request is a stream and
          // can only be consumed once. Since we are consuming this
          // once by the cache and once by the browser for fetch, we need
          // to clone the request.
          var fetchRequest = event.request.clone();

          return fetch(fetchRequest).then(
            function(response) {
              console.log("Have a response.", response);
              // Check if we received a valid response
              if (!response || response.status !== 200 || response.type !== 'cors') {
                return response;
              }

              // IMPORTANT: Clone the response. A response is a stream
              // and because we want the browser to consume the response
              // as well as the cache consuming the response, we need
              // to clone it so we have two streams.
              var responseToCache = response.clone();

              caches.open(CACHE_NAME)
                .then(function(cache) {
                  console.log("Caching response", event.request);
                  cache.put(event.request, responseToCache);
                });

              return response;
            }
          );
        })
    );
  }
});

How to keep event handlers alive in ActionScript 3

I have a Flex app that connects to a JBoss/MS-SQL back-end. Some of our customers have a proxy server in front of their JBoss with a timeout of 90 seconds. In our application there are searches that can take up to 2-3 minutes for complex criteria. Since the proxy isn't smart enough to recognize AMF's keep-alive pings for what they are, it sends a 503 to the client, which in Flex land becomes a "Channel Call Failed" event. In searching SO and other places, this seems to be a common problem. We can't do anything about the proxy or lengthen the timeout; the application needs to handle it.
Of course the back-end continues to process and eventually ships the results to the client. But the user gets an ugly error message and assumes the app is broken.
The solution I have settled on is to consume the CCF error and have the client continue to wait. I have managed the first part, but I can't figure out how to keep the client's handlers active to receive the data (and/or consume another timeout if necessary).
Current error handler:
private function handleSearchError(event : FaultEvent) : void {
    if (event.fault.faultCode == "Channel.Call.Failed") {
        event.stopImmediatePropagation(); // doesn't seem to help
        return;
    }
    if (searchProgress != null) {
        PopUpManager.removePopUp(searchProgress);
        searchProgress = null;
    }
    etc...
}
This is the setup:
<mx:Button id="btnSearch"
    label="{resourceManager.getString('recon_perspective', 'ReconPerspective.ReconView.search')}"
    icon="{iconSearch}"
    click="handleSearch()" includeIn="search, default"/>
And:
<mx:method name="search" result="event.token.resultHandler(event);"
    fault="handleSearchError(event);"/>
Kicking off the call:
var token : AsyncToken = null;
token = sMSrv.search(searchType.toString(), getSearchMode(), criteria,
    smartMatchParent.isArchiveMode);

searchProgress = LoadProgress(PopUpManager.createPopUp(
    FlexGlobals.topLevelApplication as DisplayObject, LoadProgress, true));
searchProgress.title = resourceManager.getString('matching', 'smartmatch.loading.trans');
searchProgress.token = token;
searchProgress.showCancelButton = true;
PopUpManager.centerPopUp(searchProgress);

token.resultHandler = handleSearchResults;
token.cancelSearch = false;
So my question is how do I keep handleSearch and handleSearchError alive to consume the events from the server?
I verified that the data comes back from the server by using WebDeveloper in the browser to watch the network traffic, and if you cause the app to refresh that screen, the data gets displayed.
I'm very inexperienced, but would this help?
private function handleSearchError(event : FaultEvent) : void {
    if (event.fault.faultCode == "Channel.Call.Failed") {
        event.stopImmediatePropagation(); // doesn't seem to help
        if (event.isImmediatePropagationStopped(true)) {
            // After it's stopped, do something here?
        }
        return;
    }
    if (searchProgress != null) {
        PopUpManager.removePopUp(searchProgress);
        searchProgress = null;
    }
    etc...
}

Reply Ping back AS3

I am making an IRC client in ActionScript 3, but now I have a little problem with Socket. Connections are fine, but I get disconnected when I don't reply to the ping, so my question is: how can I create a pong in AS3? I did search for tutorials, but I can't find any, and some of the explanations aren't easy to understand. If anyone can help, please point me in the right direction.
Thanks!
This is how far I've got:
var servername:String = "irc.example.com";
var portnumber:int = 6667;

var _sock:Socket = new Socket();
_sock.addEventListener(Event.CONNECT, onConnect);
_sock.addEventListener(ProgressEvent.SOCKET_DATA, onSocketData);
_sock.connect(servername, portnumber);

function onConnect(evt:Event):void {
    tServerInfo.text = "Connecting to " + servername;
}

function onSocketData(event:ProgressEvent):void {
    var socketdata:String;
    while (_sock.bytesAvailable) {
        socketdata = _sock.readUTFBytes(_sock.bytesAvailable);
        tServerInfo.text = socketdata;
    }
}
Ping, or keep-alive, messages aren't anything special; they're just normal messages sent on a schedule. You just need to set up a Timer to send a ping message (it can be anything) every, say, 15 seconds or so in order to keep the Socket open; otherwise it'll close down because it's not being used.
You also need a small bit of code to ignore these messages when you're reading them, but it's trivial.
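For the IRC case specifically, the server periodically sends "PING :<token>" and expects "PONG :<token>" back, otherwise it drops the connection. A minimal sketch building on the question's onSocketData handler (the _sock and tServerInfo names come from the question; the rest is illustrative, and a real client would also buffer data in case a line is split across packets):
function onSocketData(event:ProgressEvent):void {
    var socketdata:String = _sock.readUTFBytes(_sock.bytesAvailable);
    // IRC messages are separated by CRLF, so handle each line on its own
    var lines:Array = socketdata.split("\r\n");
    for each (var line:String in lines) {
        if (line.indexOf("PING") == 0) {
            // echo the token back, e.g. "PING :12345" becomes "PONG :12345"
            _sock.writeUTFBytes("PONG" + line.substr(4) + "\r\n");
            _sock.flush();
        } else if (line.length > 0) {
            tServerInfo.text = line;
        }
    }
}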