Is there a way to register code to be run when Flash is about to close (e.g., when the user closes the browser or when DOM manipulation causes the embedded player to be removed)?
In particular, I'd like for my application to send a closing packet to a remote service so the user's peers know that the user has no chance of coming back without having to wait for a timeout. I'm using URLLoader and URLRequest to maintain a BOSH connection, so I welcome solutions applicable to this specific case. However, if there are NetConnection-specific solutions, I'm sure I can learn from them too.
I'm happy to accept that this callback won't be run on a kill -9 but it would be nice to have the more graceful exit paths allow for some code execution.
It seems like the better solution would be to handle this on the server side, no? The server should be able to detect the disconnection, at which point you could invalidate the session.
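For instance, a minimal sketch of that idea in Node.js (the session map and the notifyPeersOfDisconnect helper below are hypothetical placeholders for your own BOSH session handling):

```javascript
// Hypothetical sketch: stamp every incoming BOSH request and periodically
// sweep for sessions that have gone quiet longer than the allowed timeout.
const SESSION_TIMEOUT_MS = 30 * 1000;
const lastSeen = new Map(); // sessionId -> timestamp of the last request

// Call this from whatever handles each incoming request for a session.
function touchSession(sessionId) {
  lastSeen.set(sessionId, Date.now());
}

// Periodic sweep: anything silent for too long is treated as disconnected.
setInterval(() => {
  const now = Date.now();
  for (const [sessionId, ts] of lastSeen) {
    if (now - ts > SESSION_TIMEOUT_MS) {
      lastSeen.delete(sessionId);
      notifyPeersOfDisconnect(sessionId); // placeholder: your "closing packet" logic
    }
  }
}, 5000);
```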
However, you could go with a client/socket-based solution, albeit with much more overhead. Using FMS or some other RTMP/real-time server, you could dispatch events to your web server when a connection has dropped (though you might have issues in the case of low network connectivity or an internet drop). I would suggest against this though, as in my experience FMS sucks :)
Is setting extremely low timeouts not a possibility (e.g. < 10 seconds)?
Is there a way to stream video and audio on a website just to the clients, using a camera installed on the server? For instance, like YouTube does?
I've started reading about WebRTC, but if I use WebRTC it seems I would have to set up a STUN/TURN server and other things, which I think isn't necessary for a one-way stream (this is just my understanding of things), because I don't need anything from the clients, literally, neither their video nor their audio.
So is there a way to achieve this using html5, streaming just in one direction:
server (camera) -> clients
Is there something about this out there, or should I stick with WebRTC?
I'm going to explain a possible solution for this scenario. There might be others, but I hope mine gives you a rough idea of how you could do it and a starting point to explore more of the amazing possibilities of WebRTC. Please let me know if anything is unclear.
So, WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. Sweet. It also has quite good browser support (not in every browser, though: Safari only just started supporting it a month ago with Safari 11). But in this case we want to use WebRTC on the server side. At the end of the day we can still think of it as peer-to-peer real-time communication, where one of our peers is the server.
I don't know if you are familiar with Node.js, but I recommend writing your server app with it (<3 Javascript!):
There are a few libraries that wrap WebRTC functionality for use on the server side, like node-webrtc and node-rtc-peer-connection.
But I recommend you take a look at electron-webrtc, since the others might be using deprecated methods or be incomplete. electron-webrtc runs a headless Electron client in the background to use Chromium's built-in WebRTC implementation. So with it you should be able to access the camera on your server and create a stream to be served to the other peer (the browser).
All of the above covers the WebRTC-related tasks, in this case streaming video peer (server) to peer (browser).
Now, let's talk about the signaling process, STUN and TURN.
Signaling: imagine a peer-to-peer scenario with two browsers that want to establish a direct connection and stream video and audio between each other. But they don't know each other; just like if I don't know your home address, I can't send you a letter. So they need a service that helps them discover each other, so that each can learn the other's IP. This is done by what is called a "signaling server". If you somehow already know the other peer's IP, you wouldn't need a signaling server.
STUN/TURN: the scheme above works perfectly in a local area network where each peer has its own IP address and there are no firewalls or routers between them. Otherwise, you can have peers behind NATs or firewalls, and then your signaling server alone won't be enough for both peers to reach each other. If you have peers behind a NAT, you'll need a STUN server, and if you have peers behind firewalls you'll need a TURN server. This is a bit simplified, but I just want you to have the general picture of when you might need STUN/TURN servers.
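(Just to illustrate where those servers plug in once you have them: an RTCPeerConnection is simply configured with their URLs. The addresses and credentials below are placeholders, not real servers.)

```javascript
// Placeholder STUN/TURN entries: substitute your own deployment.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.org:3478' },
    { urls: 'turn:turn.example.org:3478', username: 'user', credential: 'secret' }
  ]
});
```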
To better understand Signaling, STUN and TURN, there is a very graphic article that explains them perfectly.
Now, for your scenario:
I think you probably don't need STUN/TURN servers, and you probably don't need to implement the signaling process either, because the browsers that are supposed to receive the stream from the server will already know that server's address, right? So they can establish a WebRTC connection with it.
EDIT: it is likely that you will need to implement some sort of handshake between the server and the clients (browsers), so this will be the signaling process. This is not part of WebRTC, which is why you need to implement it yourself. As I said, it is the way two peers discover each other, but they also exchange information about their local media conditions, such as codecs, resolutions they can handle, etc. For your case, your signaling server could be hosted on the same server you use to stream: you can build a small Node.js app that runs there and manages the whole signaling process easily, it is not a big deal. I recommend you read this article, especially the section "How can I build a signaling service?". In general, all WebRTC articles from that site are very helpful.
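As a rough idea of how small that Node.js piece can be, here is a minimal relay sketch using the ws package; the message format is entirely up to you, and a real app would route messages to a specific peer rather than broadcasting:

```javascript
// Minimal signaling relay sketch (npm install ws). It blindly forwards SDP
// offers/answers and ICE candidates between connected peers.
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8081 });

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    // Forward every signaling message to all other connected peers.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(raw);
      }
    }
  });
});
```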
Does this make sense to you? I think with this you can start digging a little bit more and see whether it is enough or you need to implement more stuff. Hope it helps!
I need my client to check whether the user's connection to my server is strong or weak, so I searched a lot and I think using URLLoader may be the answer:
I need to ping to an network with flash or actionscript
Is there a better way or this should do it?
The only thing using a URLLoader will tell you is whether the device has a connection to that server or not. There is no in-between measure of strength, which is what you seem to want. Additionally, if the requested server is down or slow (and Flash times out), your app will think it has no connection to the internet at all.
If you are using Adobe AIR, I suggest looking for an AIR Native Extension (commonly abbreviated to ANE) that does this. FreshPlanet has one called "ANE-Network-Info" that provides a way for iOS and Android apps to read if they have a network connection or not, though no way to get signal strength. Do some searching and you will probably find one, at least for iOS and/or Android.
If this isn't an AIR app, there's not much you can do. I suggest you treat it as any other website. If the connection drops or is weak, that's the problem of the client, not yours. If a request to a server fails, alert them. Beyond that, I don't think there is much else you can do.
I've been writing an extension that allows the user to issue voice commands to control their browser, and things were going great until I hit a catastrophic problem. It goes like this:
The speech recognition object is in continuous mode, and whenever the onerror: 'no-speech' or onend events fire, it restarts. This way, the extension is constantly waiting to accept input and reacts whenever a command is issued, even after 5 minutes of silence.
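For context, a minimal sketch of that restart pattern looks something like this (handleCommand is a placeholder for the extension's own command dispatch):

```javascript
// Continuous recognition that restarts itself whenever it ends, so the
// extension keeps listening indefinitely. Note that an immediate restart
// with no delay is what produces a rapid request loop if the service
// starts rejecting requests.
const recognition = new webkitSpeechRecognition();
recognition.continuous = true;

recognition.onresult = (event) => {
  const result = event.results[event.results.length - 1];
  handleCommand(result[0].transcript); // placeholder: dispatch the voice command
};

recognition.onerror = (event) => {
  // 'no-speech' (and other errors) are followed by an onend anyway,
  // so the restart below covers them too.
  console.log('recognition error:', event.error);
};

recognition.onend = () => recognition.start(); // restart after every end

recognition.start();
```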
After a few days of development, today I reached the point where I was testing it in practical use, and I found that after a little while (and with no change to anything on my part), my onend event started firing constantly. As in, looking at the console, I would see 18,000 requests being made in the space of three seconds, all being instantly denied, thus triggering onend and restarting the request.
I'm aware that it would be optimal to wait for sound before sending a request, or to have local speech recognition capabilities without the need for a remote server, but the present API does not allow that.
Are my suspicions correct? Am I getting request limited?
Are my suspicions correct? Am I getting request limited?
Yes
I'm aware that it would be optimal to wait for sound before sending a request, or to have local speech recognition capabilities without the need for a remote server, but the present API does not allow that.
To hide the IP source of your request you can use anonymizer networks like Tor, though it will not be fast.
It's naive to assume Google will spend resources to process all audio being recorded on your system. In your application development, it is better to rely on an API which provides at least some guarantees. It could be either a commercial API or an open source implementation like CMUSphinx.
With CMUSphinx, you can also properly implement command keyword detection and increase accuracy by specifying the grammar of the commands.
You could also use a Voice Activity Detection (VAD) algorithm to detect when a user is talking. This can be done by setting either a volume threshold or a frequency threshold (the fundamental frequency of human speech is usually below about 400 Hz, for example). This way, you won't send useless requests to Google unless those conditions are met. I would not recommend using Tor, as this would significantly increase latency. CMUSphinx is probably the best local option, but if you still want to use a web-based service, I would recommend either using a Voice Activity Detection algorithm or finding different web-based software.
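A rough sketch of the volume-threshold variant using the Web Audio API might look like this (the threshold value and the startRecognition call are placeholders you would tune and wire up yourself):

```javascript
// Crude voice activity detection: compute the RMS level of the microphone
// signal and only kick off recognition once it crosses a threshold.
const THRESHOLD = 0.02; // placeholder; tune for your microphone/environment

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Float32Array(analyser.fftSize);

  function poll() {
    analyser.getFloatTimeDomainData(samples);
    let sum = 0;
    for (const s of samples) sum += s * s;
    const rms = Math.sqrt(sum / samples.length);

    if (rms > THRESHOLD) {
      startRecognition(); // placeholder: start your SpeechRecognition here
    }
    requestAnimationFrame(poll);
  }
  poll();
});
```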
I built a simple server-client application (Windows) using Adobe AIR, based on the UDP protocol. What I want to achieve is to test how my application behaves under network disturbances (latency, packet loss, packet reordering) on a SINGLE PC.
There is plenty of programs for network disturbance simulation, but it looks like they're all made to simulate network disturbances between two PCs, which is not what I need.
If you are using Windows, it's not really possible to add latency on localhost. I ran into this issue this winter, and here is how I solved the problem.
All the latency logic will live in your AS3 code. At the moment you receive some data (the socket's data progress event), you create a new Timer with the desired delay (or reuse an existing one) and pass the received socket data along with its TimerEvent.TIMER_COMPLETE event. When your timer fires, you use its data exactly as you normally would without it: you call the needed functions, process it, and do whatever you need. You can also use setTimeout instead of Timer; it doesn't really matter. You can also add some random packet loss by simply not creating a Timer, so that data never passes through. And you can use a random Timer delay so that some packets arrive reordered.
I won't write any code because the implementation really depends on what you already have now. But I hope this little hint will help you :)
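Purely as an illustration of the idea (sketched here in JavaScript with a setTimeout-style API; the AS3 version would use Timer or flash.utils.setTimeout in exactly the same way), the disturbance wrapper could be as small as this:

```javascript
// Wrap your existing packet handler so every received datagram is delayed,
// possibly dropped, and possibly reordered before your app sees it.
const BASE_DELAY_MS = 100;  // simulated latency
const JITTER_MS     = 50;   // random extra delay; enough jitter reorders packets
const LOSS_RATE     = 0.05; // fraction of packets silently dropped

function deliverWithDisturbance(packet, handlePacket) {
  if (Math.random() < LOSS_RATE) {
    return; // simulated packet loss: never deliver this one
  }
  const delay = BASE_DELAY_MS + Math.random() * JITTER_MS;
  setTimeout(() => handlePacket(packet), delay); // simulated latency/reordering
}
```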
I have a RAP application which we deploy into a Tomcat instance. The application does some additional work during its first startup.
Currently, when the first user opens the webpage in a browser, it takes quite a while until the application is ready because of this one-time initialization work.
This is bad for usability, as the first user needs to wait a long time until this startup work is done.
Is there a way to trigger or simulate a first session after the Tomcat is started so we can warmup the application and the first user receives feedback quickly?
I tried to do some simple URL-requests via URLConnection to simulate a browser, but it seems the protocol to trigger a new session is non-trivial.
I also tried to use HtmlUnit to request the page with JavaScript enabled, this works to some degree, but HtmlUnit is quite heavy for this simple step.
So is there an official API or at least some sort of workaround that allows me to pre-start and initialize the application?
Unless this initialization requires a UI session (i.e. a user), the configure method of your ApplicationConfiguration could be a suitable place. However, at this point, the ApplicationContext has not been completely set up, so it could be too early. Also, if your application is based on the workbench and extension points, you won't have an ApplicationConfiguration of your own.
Would you mind opening a bug report (http://eclipse.org/rap/bugs) and describing your use case? I think we should provide some kind of hook for applications to set up and clean up, e.g. an ApplicationContextListener?