Browser chat application freezing / slowing down - google-chrome

Our system has three components: an agent-side chat application, a server, and a player-side chat.
Agents and players connect to the server via a WebSocket connection.
We have built a chat application for the agents that runs in the browser, using plain JavaScript, HTML5, and CSS. The UI has a few sections, for instance: a scrolling box where agents can see all the players, a tab box that holds the players an agent is currently talking to, a chat box for texting players, etc.
When an agent clicks on any of the players from the list, the player moves to the tab box, where the agent can chat with the player.
At any given point in time, there are around 500 to 700 players online, connecting from gaming websites, and around 20 agents connecting from the agent chat application. Each agent can chat with up to 32 players simultaneously.
After an agent logs into the chat application, a WebSocket connection is established with the server, and all communication happens over it, such as:
The list of all players online
A player going offline
A player coming online
An agent picking up a player, etc.
All the communication with the server happens over a single WebSocket connection.
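To give an idea of the handling, it is roughly like this (a simplified sketch; the message types and field names are illustrative, not our exact protocol):

    const socket = new WebSocket("wss://chat-server.example/agent");

    socket.onmessage = (event) => {
      const msg = JSON.parse(event.data);
      switch (msg.type) {
        case "playerList":    // full list received after login
          msg.players.forEach(addPlayerRow);
          break;
        case "playerOnline":  // a player coming online
          addPlayerRow(msg.player);
          break;
        case "playerOffline": // a player going offline
          document.getElementById("player-" + msg.playerId)?.remove();
          break;
      }
    };

    function addPlayerRow(player) {
      const row = document.createElement("div");
      row.id = "player-" + player.id;
      row.textContent = player.name;
      // Clicking a player asks the server to move them into the tab box.
      row.addEventListener("click", () =>
        socket.send(JSON.stringify({ type: "pickPlayer", playerId: player.id })));
      document.getElementById("player-list").appendChild(row);
    }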
Now, coming to the problem: the agents complain (usually when there are somewhat more players online than usual) that the UI feels very sluggish, scrolling becomes slow, clicking on a player does not respond for 10-15 seconds, and all actions become very slow. In normal circumstances it's very smooth.
My investigation so far:
I tried to monitor its memory consumption. When the agent chat application is launched in the browser it takes around 50 MB, and over time I have seen it reach close to 1 GB, which is when things get very slow. However, I have also seen it get sluggish when the occupancy is around 500 MB.
I checked the RAM on the agent's machine, which was around 32 GB; I felt that was sufficient compared to what the application needed.
I am investigating why the memory consumption gets so high, and how I can limit it.
I am trying to run the application with the "Disable cache" option checked to see if the performance gets better (still checking).
I have also tried repeatedly picking and dropping players without messaging, but that hardly caused any memory spike.
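Next, I plan to sample the heap from inside the app to correlate the sluggish periods with heap growth and DOM size. A minimal sketch using Chrome's non-standard performance.memory API (the #player-list selector is just illustrative; numbers are coarse unless Chrome is started with --enable-precise-memory-info):

    // Log heap usage and player-list size every 30 seconds (Chrome-only API).
    setInterval(() => {
      if (performance.memory) {
        const mb = (performance.memory.usedJSHeapSize / 1048576).toFixed(1);
        const rows = document.querySelectorAll("#player-list > div").length;
        console.log(`heap: ${mb} MB, player rows: ${rows}`);
      }
    }, 30000);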
Can someone help me with ideas on how to solve this problem?

Related

Low latency (< 2s) live video streaming HTML5 solutions?

With Chrome disabling Flash by default very soon, I need to start looking into HTML5 replacements for my Flash/RTMP solution.
Currently, with Flash + RTMP, I have a live video stream with less than 1-2 seconds of delay.
I've experimented with MPEG-DASH, which seems to be the new industry standard for streaming, but it came up short: a 5-second delay was the best I could squeeze out of it.
For context, I am trying to allow users to control physical objects they can see on the stream, so anything above a couple of seconds of delay leads to a frustrating experience.
Are there any other techniques, or are there really no low-latency HTML5 solutions for live streaming yet?
Technologies and Requirements
The only web-based technology set really geared toward low latency is WebRTC. It's built for video conferencing. Codecs are tuned for low latency over quality. Bitrates are usually variable, opting for a stable connection over quality.
However, you don't necessarily need this low latency optimization for all of your users. In fact, from what I can gather on your requirements, low latency for everyone will hurt the user experience. While your users in control of the robot definitely need low latency video so they can reasonably control it, the users not in control don't have this requirement and can instead opt for reliable higher quality video.
How to Set it Up
In-Control Users to Robot Connection
Users controlling the robot will load a page that utilizes some WebRTC components for connecting to the camera and control server. To facilitate WebRTC connections, you need some sort of STUN server. To get around NAT and other firewall restrictions, you may need a TURN server. Both of these are usually built into Node.js-based WebRTC frameworks.
The cam/control server will also need to connect via WebRTC. Honestly, the easiest way to do this is to make your controlling application somewhat web-based. Since you're using Node.js already, check out NW.js or Electron. Both can take advantage of the WebRTC capabilities already built into WebKit, while still giving you the flexibility to do whatever you'd like with Node.js.
The in-control users and the cam/control server will make a peer-to-peer connection via WebRTC (or TURN server if required). From there, you'll want to open up a media channel as well as a data channel. The data side can be used to send your robot commands. The media channel will of course be used for the low latency video stream being sent back to the in-control users.
Again, it's important to note that the video that will be sent back will be optimized for latency, not quality. This sort of connection also ensures a fast response to your commands.
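As a rough sketch (the STUN/TURN URLs, the command format, and the signaling are placeholders, not specific recommendations), the in-control client's side of that connection could look something like this:

    // Placeholder ICE servers; point these at your own STUN/TURN deployment.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: "stun:stun.example.com:3478" },
        { urls: "turn:turn.example.com:3478", username: "user", credential: "secret" }
      ]
    });

    // Data channel for robot commands (the command format is hypothetical).
    const control = pc.createDataChannel("robot-control");
    control.onopen = () => control.send(JSON.stringify({ cmd: "forward", speed: 0.5 }));

    // Low latency video coming back from the cam/control server.
    pc.ontrack = (event) => {
      document.querySelector("video").srcObject = event.streams[0];
    };

    // The offer/answer and ICE candidate exchange happens over your own
    // signaling channel (e.g. a web socket) and is omitted here.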
Video for Viewing Users
Users that are simply viewing the stream and not controlling the robot can use normal video distribution methods. It is actually very important for you to use an existing CDN and transcoding services, since you will have 10k-15k people watching the stream. With that many users, you're probably going to want your video in a couple different codecs, and certainly a whole array of bitrates. Distribution with DASH or HLS is easiest to work with at the moment, and frees you of Flash requirements.
You will probably also want to send your stream to social media services. This is another reason why it's important to start with a high quality HD stream. Those services will transcode your video again, reducing quality. If you start with good quality first, you'll end up with better quality in the end.
Metadata (chat, control signals, etc.)
It isn't clear from your requirements what sort of metadata you need, but for small message-based data you can use a web socket library, such as Socket.IO. As you scale this up to a few instances, you can use pub/sub, such as Redis, to distribute messages across the servers.
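A minimal sketch of that setup (assuming Socket.IO 2.x with the socket.io-redis adapter; adjust to your versions):

    const io = require("socket.io")(3000);
    const redisAdapter = require("socket.io-redis");

    // Every Node.js instance attaches the same Redis adapter, so an emit on
    // one instance is published via Redis and fanned out by all the others.
    io.adapter(redisAdapter({ host: "localhost", port: 6379 }));

    io.on("connection", (socket) => {
      socket.join("stream-metadata");
    });

    // Call this wherever chat messages or control events originate.
    function broadcastMetadata(message) {
      io.to("stream-metadata").emit("metadata", message);
    }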
Synchronizing the metadata to the video depends a bit on what's in that metadata and what, specifically, the synchronization requirement is. Generally speaking, you can assume that there will be a reasonable but unpredictable delay between the source video and the clients. After all, you cannot control how long they will buffer. Each device is different, each connection variable. What you can assume is that playback will begin with the first segment the client downloads. In other words, if a client starts buffering a video and begins playing it 2 seconds later, the video is 2 seconds behind the moment the first request was made.
Detecting when playback actually begins client-side is possible. Since the server knows the timestamp of the video it sent to the client, it can inform the client of its offset relative to the beginning of video playback. Since you'll probably be using DASH or HLS, and you need to use MSE with AJAX to get the data anyway, you can use the response headers in the segment response to indicate the timestamp for the beginning of the segment. The client can then synchronize itself. Let me break this down step by step (a code sketch follows the list):
Client starts receiving metadata messages from the application server.
Client requests the first video segment from the CDN.
CDN server replies with video segment. In the response headers, the Date: header can indicate the exact date/time for the start of the segment.
Client reads the response Date: header (let's say 2016-06-01 20:31:00). Client continues buffering the segments.
Client starts buffering/playback as normal.
Playback starts. The client can detect this state change on the player and knows that 00:00:00 on the video player is actually 2016-06-01 20:31:00.
Client displays metadata synchronized with the video, dropping any messages from previous times and buffering any for future times.
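Here is a sketch of steps 3-7 in client-side JavaScript. It assumes each metadata message carries a wall-clock timestamp, and that the CDN exposes the Date header to scripts via Access-Control-Expose-Headers:

    let playbackEpoch = null; // wall-clock ms corresponding to player time 00:00:00

    async function fetchSegment(url) {
      const res = await fetch(url);
      if (playbackEpoch === null) {
        // Steps 3-4: the Date header of the first segment marks the start of playback.
        playbackEpoch = new Date(res.headers.get("Date")).getTime();
      }
      return res.arrayBuffer(); // appended to the MSE SourceBuffer as usual
    }

    // Step 7: render a metadata message when the player reaches its timestamp.
    function scheduleMetadata(message, video, render) {
      const dueAt = (message.timestamp - playbackEpoch) / 1000; // player time, seconds
      const wait = (dueAt - video.currentTime) * 1000;
      if (wait <= 0) render(message);                // already due
      else setTimeout(() => render(message), wait);  // due in the future
    }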
This should meet your needs and give you the flexibility to do whatever you need to with your video going forward.
Why not [magic-technology-here]?
When you choose low latency, you lose quality. Quality comes from available bandwidth. Bandwidth efficiency comes from being able to buffer and optimize entire sequences of images when encoding. If you wanted perfect quality (lossless for each image), you would need a ton of bandwidth (gigabits per viewer). That's why we have these lossy codecs to begin with.
Since you don't actually need low latency for most of your viewers, it's better to optimize for quality for them.
For the 2 users out of 15,000 who do need low latency, we can optimize for exactly that. They will get substandard video quality, but will be able to actively control a robot, which is awesome!
Always remember that the internet is a hostile place where nothing works quite as well as it should. System resources and bandwidth are constantly variable. That's actually why WebRTC auto-adjusts (as best as reasonable) to changing conditions.
Not all connections can keep up with low latency requirements. That's why every single low latency connection will experience drop-outs. The internet is packet-switched, not circuit-switched. There is no real dedicated bandwidth available.
Having a large buffer (a couple seconds) allows clients to survive momentary losses of connections. It's why CD players with anti-skip buffers were created, and sold very well. It's a far better user experience for those 15,000 users if the video works correctly. They don't have to know that they are 5-10 seconds behind the main stream, but they will definitely know if the video drops out every other second.
There are tradeoffs in every approach. I think what I have outlined here separates the concerns and gives you the best tradeoffs in each area. Please feel free to ask for clarification or ask follow-up questions in the comments.

WP8 Uploading/Downloading large files

I am fairly new to Windows Phone development. We have a scenario where we allow users to upload or download files, along with authentication (OAuth, NTLM, forms: all standard mechanisms, but not limited to OAuth).
So far, our R&D suggests that we have the following options:
1- Resource Intensive Agent
The constraints associated with Resource Intensive Agents (minimum battery level, etc.) have led us to drop this option.
2- Periodic Agent
A relatively better option; however, they run only every 30 minutes, and the 10-minute duration constraint raises doubts: if a user wants to upload a video of, say, 1-2 GB, completion is not guaranteed, and you can anticipate other problems associated with this approach.
3- Background File Transfer
This is the best option in our scenario; however, my colleague told me that it does not support basic Windows authentication and that we cannot change the user-agent, etc.
4- On Application
Another option is to perform the network operation in the application itself, but we can't keep the user in the application for a long duration, and after some time the lock screen would appear. So...
Can anyone who has experienced a similar scenario, or anyone from the product team, offer guidance here? It's a common scenario; are we missing something, or is it really an API limitation?
Resource Intensive Agents will indeed not work for your use case, because they require external power to work. Not to mention that if the user receives a phone call, the agent terminates.
Periodic Agents have a 25-second duration limit, not 10 minutes (the 10-minute limit applies to Resource Intensive Agents), so they are really not an option if you need to upload a gigabyte of information.
Background File Transfers have a hard limit of 100 megabytes (it's even less on cellular internet).
On Application is a very possible option; you can prevent the phone from going to the lock screen if that's a problem. The bigger issue here is that the user is pretty much stuck for the duration of the upload. More importantly, this seems to be your only option of the four you mentioned.

Windows Phone 8 - Keeping background location tracking active beyond four hours

I'm in the process of developing a WP8 app that makes use of the background location tracking abilities provided by the OS. The idea is to monitor the users position and to notify them when they are near certain types of places.
So far it all seems to work fine and when running the location tracking works as I would expect.
The problem is, it seems that the phone times out background apps after around four hours, stopping the location tracking.
I can understand why Microsoft did it: to preserve battery life, etc. But there's not much point in having a background location-tracking app that has to be manually restarted every four hours! If a user chooses to run this app and is made aware of the potential battery hit, surely it should be able to run indefinitely (to a point, of course: if the system runs out of resources or similar, that's fair enough).
Does anyone have any experience with this? I would have thought hundreds of other apps in the store must have run into this issue. And presumably there must be some way of keeping the location tracking running?
I've tried periodically updating the live tile (using a DispatcherTimer) while the tracking is running but this doesn't seem to be enough to keep the app alive either :(
Anyone have any ideas?
Thanks.
There is no way to achieve your desired behavior. The app will be deactivated under any of the following conditions:
The app stops actively tracking location. An app stops tracking location by removing event handlers for the PositionChanged and StatusChanged events of the Geolocator class or by calling the Stop() method of the GeoCoordinateWatcher class.
The app has run in the background for 4 hours without user interaction.
Battery Saver is active.
Device memory is low.
The user disables Location Services on the phone.
Another app begins running in the background.
Source: Running location-tracking apps in the background for Windows Phone 8
What you could do is show a toast notification before the app is deactivated, advising the user to navigate back to the app; that extends the period for another 4 hours.
There is no way to keep it running without any user interaction.

Alternative for Background Transfer Service to run uploads in background

I've used the Background Transfer Service (BTS) API for Windows Phone in two apps and experienced very bad problems. It became one of the main sources of bugs in both apps: for some reason, downloads often refused to start regardless of the flags I set (connected to WiFi, not connected, connected to a power outlet, etc.), and it was random from one user to another. That, plus bad responses from the servers.
Is there a more customized way to achieve this? Which threads or loops remain alive in my app when I navigate out to the external:// world? I should probably check with counters.
My main question remains: apart from BTS, is there anything that would allow a 3-4 MB file to keep uploading even if I navigate away from my app, e.g. to play an MP3 in an external:// app?
Once you exit your app, you are pretty much shut down. You can masquerade as a location-tracking background agent to remain in the background when you get deactivated, though you'll drain the battery, and I believe only one of these can be active at a time. Generally, highly not recommended (and you'll probably fail certification).
A better way to do this, if BTS is not to your liking, is to use a ResourceIntensiveTask. This will only be triggered when the user is plugged in and has WiFi, but it will allow you to run whatever you want for as long as those conditions are met (for example, at night), which should be plenty of time to upload a 3-4 MB file.

Shared HTML5 offline cache within a local network?

OK, so I know that HTML5 in itself isn't finalized yet, and I've done my fair share of reading on HTML5's offline modes.
Here's the question:
Can I set up an offline app in such a way that the entire system works offline and SHARES a cache (or an XML repository, or an SQLite DB, or something) with other clients on the SAME network?
For example, my system runs on clients that need to share information with each other within a local network, but it's fully web-based. If the local network's router dies, how can these clients continue to communicate with one another?
=== END ===
NOTE: If you're still not clear, I'd recommend you read on. The information below is to further clarify what I want.
In case you're still reading, here's a detailed example:
4 people in a restaurant are using a web-based ordering system. They each have an iPod Touch (lol), connected to the internet via WiFi. Each member logs in to the system under a shared account, which allows them to share information. The cook is also connected, but uses a mounted iPad (lolz) in the kitchen.
When a waiter records an order, the data is stored in a DB, and AJAX is used to constantly refresh the cook's screen, so he is notified instantly.
Assume Zeus struck down the electricity in the restaurant.
Now, there's no internet connection, but all devices in question still function thanks to their inherent battery-oriented nature.
The web app switches to offline mode, and utilizes cached menus and screens.
BUT!
How does the offline system share information between client devices? How does the iPod Touch #3 tell the Cook's iPad - "Hey there, this is order #5352"?
The most obvious thought is a shared cache or something...
Ideas?
That is not possible; web pages cannot communicate with each other without a server.
The only thing you could do is set up a local server as a fallback for when the server on the internet is offline or unreachable.
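For example, a minimal sketch of such a fallback, assuming a machine on the local network can run Node.js with the ws package: a tiny relay that forwards every message to all other connected devices.

    const WebSocket = require("ws");
    const wss = new WebSocket.Server({ port: 8080 });

    wss.on("connection", (client) => {
      client.on("message", (data) => {
        // Relay to every other connected device (e.g. the cook's iPad).
        for (const other of wss.clients) {
          if (other !== client && other.readyState === WebSocket.OPEN) {
            other.send(data.toString()); // e.g. '{"order": 5352, "table": 3}'
          }
        }
      });
    });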